Archive for Privacy

Analytics Suggest 96% Of U.S. Based iOS Users Leave App Tracking Disabled in iOS 14.5

Posted in Commentary with tags , on May 7, 2021 by itnerd

Apple’s App Tracking Transparency feature has been available to iPhone users for a couple of weeks now. And early metrics suggest that an overwhelming 96% of users in the U.S. leave app tracking disabled. In other words, 96% of iPhone users do not want to be tracked at all. This comes from analytics firm Flurry, which looked at 2.5 million users in the U.S.

In short, only 4% of users in the U.S. opted into app tracking. When looking at users worldwide who allow app tracking, the figure rises to 12% in a 5.3 million user sample. Flurry’s daily figures also show that the opt-in rate is stable: it hovers between 11-13% worldwide, and 2-5% in the U.S.

Flurry themselves point out what is at stake here:

With opt-in rates expected to be low, this change is expected to create challenges for personalized advertising and attribution, impacting the $189 billion mobile advertising industry worldwide. 

In other words, if you’re Facebook, and your revenue model relies on being able to track users all over the Internet, you have a serious problem. And it highlights that users on the iOS platform overwhelmingly value their privacy above all else.

If you want to learn more about App Tracking Transparency and how you can disable it or enable it on an app by app basis, I wrote an article about it here.

App Privacy Study Looks At Most ‘Invasive’ Apps Collecting User Data… Guess Who Is Number One And Number Two?

Posted in Commentary with tags on March 17, 2021 by itnerd

Yesterday, I came across a company called pCloud, which earlier this month took a look at the most “invasive” apps, the ones that collect the most data from users and share it with third parties. You can guess who was the most invasive:

Every time you search for a video on YouTube, 42% of your personal data is sent elsewhere. This data goes on to inform the types of adverts you’ll see before and during videos, as well as being sold to brands who’ll target you on other social media platforms. Instagram shares 79% of your data including browsing history and personal information with others online.

YouTube isn’t the worst when it comes to selling your information on. That award goes to Instagram, which shares a staggering 79% of your data with other companies. Including everything from purchasing information, personal data, and browsing history. No wonder there’s so much promoted content on your feed.

With over 1 billion monthly active users it’s worrying that Instagram is a hub for sharing such a high amount of its unknowing users’ data.

Remember, Instagram is owned by Facebook. And Facebook was number two on this list as noted below. So read into that what you will:

  • Instagram collects 79 percent of personal data
  • Facebook collects 57 percent
  • LinkedIn and Uber Eats both were caught collecting 50 percent of data.
  • YouTube and YouTube Music were found to be collecting 43 percent of personal data to share with third parties.

So if you have any of these apps on your phone, you now know your data is being vacuumed up like a maid using a Hoover. On the other end of the spectrum, apps that don’t collect much data include Signal, Clubhouse, Netflix, Shazam, Etsy, Skype, and Telegram. This will change somewhat for iOS users when iOS 14.5 is released, as Apple will begin requiring apps that access a user’s advertising identifier for cross-app and website tracking to get express permission before using it. That may help cut down on some of the third-party data sharing. This report alone may also get some of the companies on this list to alter their behavior. By some, I mean any company not named Facebook, which simply doesn’t care about your privacy.

Apple & Google To Ban Apps Using Location Tracking Tech From X-Mode If Devs Don’t Remove The Tracking Tech

Posted in Commentary with tags , , on December 10, 2020 by itnerd

Have you heard of a company called X-Mode? Chances are you haven’t. But it is likely that apps on your phone use its tech. Here’s how it works: X-Mode obtains location data from apps on the App Store and Google Play Store and sells that information to contractors associated with the U.S. military and national security industry.

Both Apple and Google are now taking steps to ban apps with X-Mode tracking tech in them says The Wall Street Journal:

The Journal reported last month that X-Mode was collecting data from phones running its software about nearby “Internet of Things” devices such as fitness trackers and automobiles. That data was being made available to a company called SignalFrame that had received a small grant from the military and had been trying to win other national security-related contracts.

In addition, Vice News reported last month that X-Mode drew some of its location information from apps with a predominantly Muslim user base, such as a dating app called Muslim Mingle and a prayer app called Muslim Pro, though the company also has software embedded in many other kinds of apps.

In response to questions from the Journal, X-Mode said it was re-evaluating its government work and that its contracts prevent anyone from linking a device to personal information such as a name, address or email address.

That didn’t make Apple and Google happy. Google is giving developers seven days to remove the X-Mode tracking tech, while Apple is reportedly giving its developers two weeks. If they fail to meet those deadlines, the apps get banned. Some developers want Apple and Google to reconsider, but I don’t see either company changing its mind. Nor should they. There is clearly something sketchy going on here, and it is good to see both Apple and Google taking action to protect their users.

Google Home Speakers May Have Been Recording Sounds Without Your Permission

Posted in Commentary with tags , on August 10, 2020 by itnerd

Your Google Home speaker may have been quietly recording sounds around your house without your permission or authorization, it was revealed this week.

A Google spokesperson told Protocol that the feature was accidentally enabled for some users through a recent software update and has since been rolled back. But in light of Monday’s news that Google invested $450 million — acquiring a 6.6% stake — in home security provider ADT, it may be a sign of things to come for Google, as it hints at the company’s secret home security superpower: millions of smart speakers already in people’s homes.

There have been a few people on Reddit who have discovered this, and frankly it bothers me. It should bother you too. I can see how this could be a powerful feature, but I can also see how it could become a privacy nightmare. Google really needs to come clean about this and give users of these smart speakers the information and controls they need to protect their privacy.

Pizza Pizza Gave Cops Customer’s Personal Info Without A Warrant….. Wow

Posted in Commentary with tags on July 4, 2020 by itnerd

One of the biggest pizza operations in Canada is Pizza Pizza. But they seem to have dropped themselves into it big time, as the Toronto Star has discovered. According to the Star, Toronto Police, while investigating gangs in the city, trawled customer information in Pizza Pizza’s databases without going for a warrant first. And Pizza Pizza simply complied:

Amid a major criminal investigation that announced dozens of arrests last year, the popular pizza chain voluntarily searched its internal data and handed over customers’ personal information to Toronto police investigators, the Star has learned. 

Officers used the technique in Project Kraken, an investigation into guns, gangs and drugs that resulted in more than 70 arrests last June. Seven of those charged were tow truck operators, police said at the time. The accused are awaiting trial.

During the investigation, Toronto police obtained telephone numbers from phone intercepts. Officers then took those numbers to Pizza Pizza to get any matching customers’ names.

None of the accused were identified using the technique. Rather, it was used to identify people associated with targets of the investigation. It is not clear how many people were identified. 

Two sources connected to the case told the Star police used the technique. The Star is not identifying the sources because they can’t publicly speak about material that is part of disclosure.

Well, that is something that doesn’t sit well with me. And I am sure that if you have used Pizza Pizza it doesn’t sit well with you either. The cops won’t say anything about this. But Pizza Pizza says this:

The Star also asked Pizza Pizza for comment. In an emailed response, the company said it is “committed to protecting the Personal Information provided by customers. Our Privacy Policy details how we protect and use customer data in the fulfilment of customer orders.”

Pizza Pizza’s privacy policy states the company “reserves the right to access and/or disclose Personal Information where required to comply with applicable laws or lawful government requests.”

Customers, the policy states, consent with every order made to the “collection, use and disclosure of your information by Pizza Pizza in accordance with” its policy terms.

So, what that tells me is the following:

  1. If you use Pizza Pizza’s online services to order a pizza for pickup or delivery, your personal information could be handed over to someone in law enforcement or government.
  2. Pizza Pizza clearly doesn’t require a warrant to hand that info over.

I have the Pizza Pizza app on my iPhone, and this is enough for me to delete it off my iPhone and not use their services again. Now Pizza Pizza may not care about that. But they will care about this:

In response to Star queries, a spokesperson for the Office of the Privacy Commissioner of Canada said the office could not comment on this specific scenario involving Pizza Pizza because it had not examined it “in detail.”

I read that as this might get a serious look by the federal Privacy Commissioner. And I hope it does, as this behavior by Pizza Pizza is simply unacceptable.

TikTok Doesn’t Belong On Your Phone Because It Is A Privacy & Security Nightmare Says Security Researcher

Posted in Commentary with tags , , on July 3, 2020 by itnerd

According to a security researcher who posted to Reddit, TikTok is one app that if you value your privacy and security, you need to delete ASAP. Here’s why:

TikTok is a data collection service that is thinly-veiled as a social network. If there is an API to get information on you, your contacts, or your device… well, they’re using it.

  • Phone hardware (cpu type, number of cores, hardware ids, screen dimensions, dpi, memory usage, disk space, etc)
  • Other apps you have installed (I’ve even seen some I’ve deleted show up in their analytics payload – maybe using a cached value?)
  • Everything network-related (ip, local ip, router mac, your mac, wifi access point name)
  • Whether or not you’re rooted/jailbroken
  • Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds – this is enabled by default if you ever location-tag a post IIRC
  • They set up a local proxy server on your device for “transcoding media”, but that can be abused very easily as it has zero authentication

The stuff that I’ve listed above is pretty bad. But it gets worse:

Here’s the thing though.. they don’t want you to know how much information they’re collecting on you, and the security implications of all of that data in one place, en masse, are f**king huge. They encrypt all of the analytics requests with an algorithm that changes with every update (at the very least the keys change) just so you can’t see what they’re doing. They also made it so you cannot use the app at all if you block communication to their analytics host off at the DNS-level.

For what it’s worth I’ve reversed the Instagram, Facebook, Reddit, and Twitter apps. They don’t collect anywhere near the same amount of data that TikTok does, and they sure as hell aren’t outright trying to hide exactly what’s being sent like TikTok is. It’s like comparing a cup of water to the ocean – they just don’t compare.

This is just downright scary. And this Reddit thread is gaining attention. Security company Zimperium had its own look at TikTok and says it’s a security risk. Anonymous has said to “delete this Chinese spyware now.” The Pentagon advises that TikTok should be deleted from phones, something the US Army has taken heed of. And while it likely has more to do with a border dispute between China and India, the latter has banned a pile of Chinese apps, including TikTok.

The point is that it’s pretty clear that TikTok is a security risk of epic proportions. If you value your security, I would read the Reddit thread and then make your own decision as to whether TikTok deserves a place on your smartphone. Or your kids’ smartphone for that matter.

Canada Announces National Contact Tracing App…. What Are The Security And Privacy Concerns?

Posted in Commentary with tags , on June 19, 2020 by itnerd

Yesterday Prime Minister Justin Trudeau announced the federal government will begin testing a “completely voluntary” contact tracing app that can be used nationwide. You can get more details here. Ever since that announcement, concerns around security and privacy controls have become top of mind. David Masson, Director of Enterprise Security for Darktrace, shared with me his security concerns associated with contact tracing:

The debate over a centralized or a decentralized approach while using contact tracing apps continues. A decentralized approach would mean that the data stays on an individual’s phone, while a centralized one would mean that all the data from the app goes to one central body. Both approaches have their own merits.

In Canada, a unified approach to contact tracing led by the Federal Government, rather than by the individual Provinces and Territories, will relieve the Provinces and Territories of some legal and financial ramifications. A unified effort would also ensure a more collaborative process for building in security and privacy controls, and it would be more efficient for decision making. As the Federal Government makes declared decisions about the app and its development, security needs to remain a priority.  A centralized approach, however, needs to come with caveats and protections.

If it is the Federal Government ensuring that a sick person remains isolated and enforcing quarantine, there will be privacy trade-offs. We must be prepared for the future: what should we do with the data after this crisis is finally said and done? Sunset clauses should be put in place to assure the Canadian public that the highest consideration will be taken and that there will be transparency about what happens once the data is no longer needed. 

With regard to the collection of data centrally, scientists and health officials could leverage the data for good. They could use data from the apps to analyze how the virus spreads, how it impacts society, and more, which would improve our ability to deal with the outbreak. However, the Federal Government will need to ensure that any data shared for research is secure.

There will also need to be the ability to have some form of open and transparent redress for all citizens with regard to any contact tracing approach in Canada.

I then asked about the fact that this app will utilize the Apple/Google Exposure Notification API. You can find out more info about that here. The Apple/Google API is billed as best in class when it comes to privacy. So my question was whether the use of this API made things safer?

I think the question isn’t is it ‘safe’, but does it make things more secure? Maybe, maybe not.

Privacy and security are not the same things. Privacy is about personal control of your own data, in particular your identity. Security is the tools that will help you control your data and some tools are better than others. Quite frequently when tools or applications are rushed to market without adequate testing, security vulnerabilities subsequently appear.

When rolling out an application that could be used by so many members of the population, governments should use the best available technology with the lowest risk for security or privacy concerns. However, even then it’s impossible to say that without a doubt an application is or is not safe and important to remember that ‘safe’ can mean different things in different contexts. 

For it to be a ‘safe’ application, the technology needs to be implemented correctly, and the app needs to be shut off when the pandemic is over. History has shown that both of these assumptions could prove to be flawed.

That’s an interesting view, as reading over the details of the Apple/Google Exposure Notification API would have had me assume that there was nothing to worry about. But from what David Masson has said, I clearly hadn’t considered all the implications of a contact tracing app like this one. Thus I thank him for his insights. It’s given yours truly, as well as a lot of you, a lot to think about.

Here’s How The Last 4 Digits Of Your Credit Card Can Be Used To Commit Fraud

Posted in Commentary with tags , on June 8, 2020 by itnerd

Following up on this story from last week where Bank Of Montreal, or BMO, was sending marketing material to customers using the last 4 digits of their credit card, a few people emailed me asking what a miscreant can actually do with four digits of a credit card number.

Actually, quite a bit. The fact is that credit card numbers aren’t just random blocks of 16 digits. There are mathematical relationships that hold between the digits. So if a miscreant knows the last four digits and those relationships, that narrows the attack surface considerably. Let me give you an example. If you know the last four digits up front, here’s what a miscreant can work out:

  • All Visa cards start with 4 and all MasterCards start with 5; that’s one digit right there.
  • If you know the bank or the card issuer, that’s a few more digits.
  • The type of card, be it gold or whatever, can give you a couple more digits.

That leaves a miscreant with only a handful of digits to figure out. Now, I will admit that this is still not a trivial exercise. But from my research on the dark web, this approach succeeds way more often than you’d think. Which, to be frank, is quite scary. Sure, they still have to figure out the expiry date and the CVV (the security code on the back of the card). But it is doable.
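
The mathematical relationship in question is the Luhn checksum, which the major card networks use for the final digit of the card number. Here’s a minimal sketch of how a miscreant could enumerate candidates; the prefix and trailing digits used in the test are made-up illustrations, not real cards:

```python
def luhn_valid(number: str) -> bool:
    """True if the digit string passes the Luhn checksum used by card networks."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of the product
        total += d
    return total % 10 == 0

def count_candidates(bin_prefix: str, last4: str, length: int = 16) -> int:
    """Brute-force how many full card numbers are consistent with a known
    issuer prefix and the leaked last four digits."""
    middle_len = length - len(bin_prefix) - len(last4)
    return sum(
        luhn_valid(bin_prefix + str(n).zfill(middle_len) + last4)
        for n in range(10 ** middle_len)
    )
```

With a 6-digit issuer prefix and 4 known trailing digits on a 16-digit card, only 6 middle digits are unknown, and the Luhn check eliminates nine out of every ten guesses: roughly 100,000 candidates instead of the 10^16 a blind guess would face.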

The fact is that a small amount of personal information can be used to perpetrate some sort of fraud, because it can be combined with information that has been acquired separately. If there’s a large breach of social security numbers (for example, the Equifax hack) and another of credit card numbers (say, an online store hack), you could link those together to perpetrate some sort of fraud. Which is why I put out the story on BMO’s use of the last 4 digits of customers’ credit cards in their marketing. It’s an attack vector. One that, while not easy to take advantage of, is exploitable. Thus you need to make sure that you’re on the right side of this so that you don’t become the next victim.

On a related note, I have yet to hear back from BMO on my questions related to this topic. That’s a shame and I think it says something about how BMO views this situation.

Why Does BMO Use The Last Four Digits Of Your Credit Card For Marketing Purposes?

Posted in Commentary with tags , on June 5, 2020 by itnerd

I became aware of something that I truly find bizarre. One of my PR contacts got some marketing material from the Bank Of Montreal, or better known as BMO. In that marketing material were the last four digits of her credit card number. She found that to be very odd which is why she pinged me on this.

But it doesn’t end there. When she reached out to BMO on Twitter to inquire as to why they were doing this, they said this:

“I can advise that with marketing offer, we ask that you provide certain information, so we can track who is taking advantage of the offers we send out. This information is only used by BMO and not provided to any third parties.”

Here’s my take.

BMO offers MasterCard branded cards, and the format of the card number goes something like this: the first digit identifies the card network (5 for MasterCard), the first six digits together form the issuer identification number (or BIN) that identifies the bank and card type, the digits after that are the individual account number, and the final digit is a check digit computed from all the others.

So if I were some sort of miscreant, having the last 4 digits of a credit card makes it a whole lot easier to guess what the full card number might be. Sure, it would take effort to get the full number, and then you would have to get the expiry date and perhaps even the CVV (the three digit security code on the back of the card) to exploit the card for fraud. So it would take some work. But it is possible to do. Beyond that, simply having the credit card number can be enough to grab personal information to commit some sort of fraud that isn’t related to going on a spending spree with someone’s credit card.
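
As a rough back-of-the-envelope calculation (assuming a standard 16-digit MasterCard layout with a 6-digit issuer prefix; the exact split varies by issuer), here is how much those leaked digits shrink the guessing space:

```python
# Layout of a typical 16-digit card number (per ISO/IEC 7812):
#   6 digits: issuer identification number (IIN/BIN), identical on
#             every card of that type from that bank
#   9 digits: individual account number
#   1 digit : Luhn check digit, computed from the other 15
TOTAL = 16
known_bin = 6      # deducible from the bank and card type
known_last4 = 4    # leaked in the marketing mail

unknown = TOTAL - known_bin - known_last4
blind_space = 10 ** TOTAL       # guessing with no information at all
narrowed_space = 10 ** unknown  # guessing only the middle digits
print(f"{blind_space:.0e} -> {narrowed_space:,} candidates")
```

A million candidates is still a lot, but it is ten billion times fewer than a blind guess, which is exactly why partial card numbers in marketing mail are a risk worth flagging.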

Both of those outcomes would of course be bad for the customer.

The other thing that I will point out is that there are many ways to track whether a customer takes advantage of an offer. There are many tools, like Salesforce’s Pardot for example, that can do this transparently. And I am pretty sure that using a credit card number, even a partial one, is not a good way of doing this. So I was very interested in why BMO decided to go with the last four digits to track whether a customer takes advantage of an offer. So I decided to ask them.

If I get an answer, I will update this story. But on the surface, this sounds like a bit of a risk to customers. And perhaps BMO needs to take a second look at this, as we live in times where everyone should be risk averse.

UPDATE: I have a screenshot of the piece of marketing that this person received. I have removed all the personal information, including the last four digits of the credit card number, and marked where those digits appeared with the words “Last 4 Digits Of Credit Card Number Above.”

An Adult Site Exposes User Data…. Which Is Not The Exposure That Users Wanted

Posted in Commentary with tags on May 6, 2020 by itnerd

CAM4, a popular adult platform that advertises “free live sex cams,” misconfigured an Elasticsearch production database so that it was easy to find and view heaps of personally identifiable information, as well as corporate details like fraud and spam detection logs. According to Wired, the database exposed 7 terabytes of names, sexual orientations, payment logs, and email and chat transcripts: 10.88 billion records in all.

First of all, very important distinction here: There’s no evidence that CAM4 was hacked, or that the database was accessed by malicious actors. That doesn’t mean it wasn’t, but this is not an Ashley Madison–style meltdown. It’s the difference between leaving the bank vault door wide open (bad) and robbers actually stealing the money (much worse).

The mistake CAM4 made is also not unique. ElasticSearch server goofs have been the cause of countless high-profile data leaks. What typically happens: They’re intended for internal use only, but someone makes a configuration error that leaves it online with no password protection. “It’s a really common experience for me to see a lot of exposed ElasticSearch instances,” says security consultant Bob Diachenko, who has a long history of finding exposed databases. “The only surprise that came out of this is the data that is exposed this time.”

And there’s the rub. The list of data that CAM4 leaked is alarmingly comprehensive. The production logs Safety Detectives found date back to March 16 of this year; in addition to the categories of information mentioned above, they also included country of origin, sign-up dates, device information, language preferences, user names, hashed passwords, and email correspondence between users and the company.

This is not trivial. Even if you take the adult nature of the site out of the equation, this is a massive leak of data that could have long-term consequences for users of this site if the data was accessed. There isn’t evidence that it has been, at least not at present. But if things like targeted attacks and extortion phishing emails start to pop up in users’ inboxes, then we’ll know that this has gone from bad to worse.
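
For anyone running Elasticsearch themselves and wondering whether their own instance has the same problem, a quick probe is enough to tell. This is a sketch only; the host is whatever you administer, and 9200 is assumed to be the default REST port. An unsecured node answers its root endpoint to anyone, no credentials required:

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

def probe_elasticsearch(host: str, port: int = 9200, timeout: float = 3.0):
    """Return (cluster_name, version) if the node answers unauthenticated,
    or None if it is unreachable or refuses the request."""
    try:
        with urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            info = json.load(resp)
    except (URLError, OSError, ValueError):
        return None  # closed port, auth in the way, or not Elasticsearch
    # An open node happily reports its cluster name and version to anyone.
    return info.get("cluster_name"), info.get("version", {}).get("number")
```

If this returns anything at all when pointed at your server’s public IP, the database is readable by the entire Internet, which is exactly the CAM4 scenario.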