Archive for January, 2026

Check Point Harmony Secure Access Service Edge Has A Critical Local Privilege Escalation Flaw

Posted in Commentary with tags on January 28, 2026 by itnerd

Researchers have uncovered a critical privilege-escalation vulnerability in Check Point’s Harmony Secure Access Service Edge Windows client software. Tracked as CVE-2025-9142, the flaw enables attackers to write or delete files outside the certificate working directory, which could compromise affected systems.

More info can be found here: https://blog.amberwolf.com/blog/2026/january/advisory—check-point-harmony-local-privilege-escalation-cve-2025-9142/
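The flaw class at issue here is an arbitrary file write or delete outside an intended working directory. As an illustration only (this is a generic sketch of the mitigation pattern, not Check Point’s code; the function and directory names are hypothetical), a privileged component can confine file operations to its working directory like this:

```python
import os

def resolve_inside(base_dir: str, user_path: str) -> str:
    """Resolve user_path and verify it stays inside base_dir.

    Raises ValueError on traversal attempts, e.g. '..' components or
    symlinks that would escape the certificate working directory.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # commonpath rejects paths that escape base after symlink resolution
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes working directory: {user_path}")
    return target
```

With this check, a request for `client.pem` resolves inside the directory, while a request for `../../etc/passwd` is rejected before any write or delete happens.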

Jim Routh, Chief Trust Officer at Saviynt, commented:

“This is an excellent example of the critical need for an enhanced PAM capability (specifically one that includes a continuous identity validation capability). Enterprises should include this in their mandatory requirements for upgrading their PAM capabilities. Privileged Access Management platforms designed for people to control access to other humans is fundamentally obsolete and insufficient for protecting against credential compromise, token compromise and the migration to agents in operation through MCP servers/gateways. It’s a different “ballgame” with different requirements for identity security to be part of the critical path toward responsible use of AI. It’s time to change our PAM requirements and this vulnerability is a reinforcement of this need for enterprises.” 

If you’re not familiar with PAM or Privileged Access Management, here’s a primer from Microsoft. And now would be a good time to have that discussion in order to keep your organization safe.

Today is Data Privacy Day

Posted in Commentary on January 28, 2026 by itnerd

Today is Data Privacy Day, an annual observance dedicated to raising awareness about the importance of protecting personal and sensitive information, helping organizations and individuals maintain trust and security in the digital age.

Privacy experts from Comparitech and Pixel Privacy have provided the following commentary on this subject. 

Brian Higgins, Security Specialist at Comparitech:

“A decade ago, Data Privacy wasn’t on anyone’s radar and ‘sharing’ was the norm. Fast forward past some really awful breaches on nation states, corporations and individuals and we find ourselves concerned and a little fearful that our privacy is at risk from criminals, unscrupulous platforms and businesses, and even the authorities who are supposed to protect and defend us. 

It’s more important than ever to take advantage of initiatives like Data Privacy Day as catalysts to encourage some personal data hygiene practices. Advocate multi-factor authentication on anything that will take it, check platform Privacy settings regularly, purge your online contacts and bin any you don’t recognize, get some mainstream Credit Monitoring if you can afford it and make sure you and those you care about know exactly what to do in a data crisis however big or small. 

Personal responsibility is the best defence these days because nobody else will do it for you. Your data is far too valuable financially, corporately or ideologically for anyone else to be relied upon to protect it for you.”

Chris Hauk, Consumer Privacy Champion at Pixel Privacy:

“As another Data Privacy Day arrives, users need to understand that they need to take personal responsibility when it comes to their privacy. Do not rely on your country’s government to protect you with new rules and regulations; they are really not there to help you. Nor can users rely on the companies they deal with to keep their data private. We have seen thousands of data breaches in recent years, exposing just how little organizations know about protecting their customers’ personal info. 

Stay private by using a VPN to hide your travels around the web. It’s no business but your own as to what you’re doing on the internet. 

Take advantage of services that remove your personal information from data brokers and people-finder services. (Manually contacting data brokers is time consuming, and considering there are thousands of these firms out there, it could quickly become your career if you don’t use a removal service.)

Think before you click on links or open attachments found in text messages and emails. Also think before turning over any kind of personal information to an outside party. Be sure to question such requests. Ask them why they need the info, what they’re going to do with the info, and who they’ll be sharing the info with.”

Fake dating app used as lure in spyware campaign targeting Pakistan: ESET

Posted in Commentary with tags on January 28, 2026 by itnerd

ESET researchers have uncovered an Android spyware campaign leveraging romance scam tactics to target individuals in Pakistan. The campaign uses a malicious app posing as a chat platform that allows users to initiate conversations conducted via WhatsApp. Underneath the romance charade, the real purpose of the malicious app, which ESET named GhostChat, is exfiltration of the victim’s data. The same threat actor appears to be running a broader spy operation – including a ClickFix attack leading to the compromise of victims’ computers, and a WhatsApp device-linking attack gaining access to victims’ WhatsApp accounts – thus expanding the scope of surveillance. These related attacks used websites impersonating Pakistani governmental organizations as lures. Victims obtained GhostChat from unknown sources, and it requires manual installation; it was never available on Google Play, and Google Play Protect, which is enabled by default, protects against it.

The app uses the icon of a legitimate dating app but lacks the original app’s functionality and instead serves as a lure – and tool – for espionage on mobile devices. Once logged in, victims are presented with a selection of 14 female profiles; each profile is linked to a specific WhatsApp number with a Pakistani (+92) country code. The use of local numbers reinforces the illusion that the profiles are real individuals based in Pakistan, increasing the credibility of the scam. Upon entering the correct code, the app redirects the user to WhatsApp to initiate a conversation with the assigned number – presumably operated by the threat actor.

While the victim engages with the app, and even prior to logging in, GhostChat spyware has already begun running in the background, silently monitoring device activity and exfiltrating sensitive data to a C&C server. Beyond initial exfiltration, GhostChat engages in active espionage: It sets up a content observer to monitor newly created images and uploads them as they appear. Additionally, it schedules a periodic task that scans for new documents every five minutes, ensuring continual surveillance and data harvesting.
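The “scan for new documents every five minutes” behavior described above is a generic polling pattern. A minimal Python sketch of that periodic new-file scan, purely to illustrate the mechanism (the five-minute interval comes from ESET’s description; the file extensions and function names are hypothetical, and a real implementation on Android would use platform APIs):

```python
import os
import time

DOC_EXTENSIONS = {".pdf", ".docx", ".txt"}  # hypothetical target types

def scan_new_documents(root: str, seen: set) -> list:
    """Return paths of documents not observed on a previous pass."""
    new = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.splitext(name)[1].lower() in DOC_EXTENSIONS and path not in seen:
                seen.add(path)
                new.append(path)
    return new

def watch(root: str, interval: int = 300):
    """Poll every `interval` seconds (300 s matches the five-minute
    cadence ESET describes) and report files added since the last pass."""
    seen: set = set()
    scan_new_documents(root, seen)  # baseline pass
    while True:
        time.sleep(interval)
        for path in scan_new_documents(root, seen):
            print("new document:", path)  # spyware would exfiltrate here
```

Each pass only surfaces files that appeared since the previous one, which is why this pattern gives an attacker continual, low-noise harvesting rather than repeated bulk uploads.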

The campaign is also connected to broader infrastructure involving ClickFix-based malware delivery and WhatsApp account hijacking techniques. These operations leverage fake websites, impersonation of national authorities, and deceptive, QR-code-based device-linking to compromise both desktop and mobile platforms. ClickFix is a social engineering technique that tricks users into manually executing malicious code on their devices by following seemingly legitimate instructions.

In addition to desktop targeting via the ClickFix attack, a malicious domain was used in a mobile-focused operation aimed at WhatsApp users. Victims were lured into joining a supposed community – posing as a channel of the Pakistan Ministry of Defence – by scanning a QR code to link their Android device or iPhone to WhatsApp Web or Desktop. Known as GhostPairing, this technique allows an adversary to gain access to the victims’ chat history and contacts, acquiring the same level of visibility and control over the account as the owners, effectively compromising their private communications.

For a more detailed analysis of GhostChat, check out the latest ESET Research blog post, “Love? Actually: Fake dating app used as lure in targeted spyware campaign in Pakistan.”

New TELUS cross-border study reveals Canadians and Americans want companies to earn their trust in AI

Posted in Commentary with tags on January 28, 2026 by itnerd

‘Include our feedback as you build AI’ is the key message from American and Canadian respondents polled in TELUS’ latest cross-border study, AI Trust Atlas: Public perspectives on bridging the AI trust gap. With 85% of Canadians and 89% of Americans reporting that they are using AI, familiarity with the technology is growing – and so are calls for inclusion and engagement in how AI is designed and deployed.

The report captures perspectives from more than 11,000 Canadians and Americans, with special attention to historically underrepresented* communities, highlighting the importance of including a wide range of voices to build trustworthy AI. In strong majorities, respondents shared that their trust in companies that use AI is stronger when organizations review potential harms before release, explain AI use in plain language and actively listen to customer input on how AI is deployed.

Charting a course to trust in AI

Survey participants laid out actions companies that deploy AI technology can take to earn their confidence:

  • 69% of Canadians and 72% of Americans want companies to actively seek and listen to customer input before deploying AI
  • 76% of Canadians and 77% of Americans would trust companies more if they reviewed AI systems for potential harms before launching new tools
  • 73% of Canadians and 74% of Americans want companies to explain how they use AI in easy-to-understand terms
  • 90% in both countries believe AI should be regulated, demonstrating strong support for governance frameworks

Trust in AI is built through collaboration

The report concludes with actionable recommendations for government, industry and academia, providing a clear roadmap for implementation:

  • Strengthen AI literacy through education programs that help people understand and safely use AI
  • Embed diverse perspectives throughout AI development – from conception to deployment – to create more resilient, trusted systems that work equitably for all communities
  • Provide clear explanations and human oversight for critical AI decisions
  • Collaborate across sectors to create ethical standards that keep people safe while encouraging innovation

Global leadership in AI

TELUS established its leadership in human-centric technology, consistently evolving how it innovates to meet the changing needs and expectations of customers and communities:

  • In September 2025, TELUS opened Canada’s first Sovereign AI Factory — a secure, scalable and high-performance AI compute facility to support Canadian businesses and the economy, and drive our nation’s AI future
  • In November 2025, the TELUS AI Factory was named Canada’s fastest and most powerful supercomputer by the prestigious TOP500 list, ranking 78th among the world’s 500 most powerful computing systems
  • TELUS’ generative AI (GenAI) customer support tool made history by becoming the first in the world to be internationally certified in Privacy by Design (ISO 31700-1)
  • It was the first telecom to sign a voluntary AI code of conduct introduced by the Canadian federal government, and has won several international awards for its work, including the Responsible AI Institute’s Outstanding Organization prize
  • TELUS participates in many international forums including speaking on UN AI for Good panels, NIST’s U.S. AI Safety Institute Consortium, and participating in the G7 Business delegation, while collaborating strategically with leading AI research institutes including Mila – Quebec Artificial Intelligence Institute, the Vector Institute and Alberta Machine Learning Institute (AMII)
  • TELUS was one of the first contributors to the Hiroshima AI Process Transparency Report and was featured as a case study in the Business at OECD report on AI skills and productivity
  • It also partnered with Indigenomics to launch IndigenomicsAI with TELUS’ Sovereign AI Factory to advance Indigenous economic growth

By prioritizing trust, TELUS aims to create a future where everyone can confidently embrace the benefits of technology. To read the full report, visit telus.com/ResponsibleAI.

Why Aren’t Apple And Google Acting To Remove Grok And X From Their App Stores?

Posted in Commentary with tags , , , on January 28, 2026 by itnerd

I have to wonder where the backbones of Tim Cook and Sundar Pichai are. I say that because it has been weeks since the whole Grok allowing users to create objectionable content thing blew up. To recap:

To the last point, the EU is one of a number of governments who are up in arms about this. And rightfully so. Elon Musk has simply gone too far and he needs to be punished for his actions. And the best way to punish him is to pull his apps from the Apple App Store and from the Google Play Store. But that hasn’t happened and you have to wonder why. Is it because Apple and Google don’t want to pick a fight with Elon? Is it because Tim Cook and Sundar Pichai are cowards? Is it about the money that these companies make from their cut of the subscriptions to Grok and X? Who knows?

But I do know this. Section 1.1.4 of Apple’s review rules prohibits the sort of thing that Grok and X are doing at the moment. Ditto for Google Play. Given that, why aren’t these companies enforcing their own rules?

The fact is, it’s beyond time for Apple and Google to stand up, grow a pair, and throw Elon’s apps off their respective app stores. Along with any other app that does this sort of thing. Because by not doing so, they are burning to the ground the trust that consumers have in their app stores as safe places to get apps. Along with that, it also sends the message that rules are rules, except when they are not.

Apple and Google, you both need to do better. Now.

Pentesting Pulse Report Reveals Widening Satisfaction Gap as Security Leaders Race to Secure AI at the Speed of Business

Posted in Commentary with tags on January 28, 2026 by itnerd

Cobalt has today released a new Pentesting Pulse Report, which exposes a growing disconnect in the security testing market. While penetration testing remains essential for both compliance and defense validation, satisfaction with traditional pentesting vendors is alarmingly low. According to the survey of 150 senior security leaders, a mere 36% report being fully satisfied with their current pentesting provider.

Key Findings:

  • Only 36% of respondents are fully satisfied with their current pentesting vendor.
  • 76% cite staying ahead of threats and vulnerabilities as a high-priority security goal.
  • 50% identify securing AI adoption as a key strategic focus.
  • 40% are motivated to switch vendors for higher quality testing, while 37% cite the need for AI-specific pentesting expertise.
  • Operational friction remains high, with vendor rotation (28%) and lack of pentester expertise (23%) cited as top challenges.
  • 35% say the ability to schedule testing in days, not weeks, would motivate them to change providers.

To read the Pentesting Pulse Report, click here

Guest Post – Think Before You Scan: That QR Code May Be a Scam

Posted in Commentary with tags on January 28, 2026 by itnerd

At the start of January, the US Federal Bureau of Investigation (FBI) issued a warning about cyber attacks organised by North Korean cybercriminals who used fake QR codes to trick users into revealing personal information. According to cybersecurity experts, similar attacks, also known as “quishing”, are on the rise not only in the US but in other countries, as cybercriminals look for new ways to profit.

Quishing (QR code phishing) is a phishing technique where cybercriminals try to trick users into scanning QR codes that lead to malicious websites. Organisations in several countries have issued warnings that bad actors place these QR codes on top of legitimate ones in public places such as kiosks, restaurants, or parking meters.

For example, last year, UK government institutions warned users of fake QR stickers on parking machines, with victims being sent to spoofed payment pages. Meanwhile, the US Federal Trade Commission issued a similar warning about unexpected packages containing QR codes that led to phishing websites.

Such fake QR codes can also be shared online. For example, the FBI said that a North Korean state-sponsored cybercriminal group, called Kimsuky, targeted employees of organizations by embedding malicious QR codes in emails. In one such instance, a QR code was presented as a way to download additional information.

According to cybersecurity experts at Planet VPN, a free virtual private network (VPN) provider, no matter where a fake QR code is placed, the scheme is similar. After scanning it, a user is often forwarded to a fake phishing website mimicking a legitimate one, such as a restaurant’s website, where cybercriminals may try to charge a user’s credit card.

According to Konstantin Levinzon, co-founder of Planet VPN, such scams can lead not only to financial losses but also to compromised devices.

“Quishing is phishing – just in a different wrapper. A QR code can lower people’s guard because this technology became ubiquitous only during the pandemic, and the threat still isn’t as widely recognized. It also shifts the ‘risky click’ from a visible link to a quick scan, making the danger easier to miss. Attackers are refining these tactics every year and constantly finding new ways to trick users,” he says.

According to Levinzon, one reason why cybercriminals may favour QR codes in emails over regular phishing emails is that QR codes often bypass anti-phishing and scam filters, because these filters typically analyze only text and links, not images.

And even if anti-spam filters in emails are equipped with QR code detection, cybercriminals often find new ways to bypass them, for example, by making QR codes in different colors.

Cybersecurity researchers at Proofpoint estimate that during the first half of last year, there were 4.2 million QR code-related threats. However, Levinzon says that the number is likely higher because many QR code scams go undetected.

When it comes to protecting against the growing threat, users are advised to be more deliberate about when and why they scan a QR code. If, after scanning a QR code, a person is forwarded to a website that asks for payment or login details, that is a real warning sign.
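That kind of post-scan check can also be done programmatically: compare the decoded URL’s hostname exactly against the domain you expect before entering anything. A minimal Python sketch (the allowlisted domain is hypothetical):

```python
from urllib.parse import urlparse

def is_expected_host(url: str, expected_hosts: set) -> bool:
    """Return True only if the URL uses HTTPS and its hostname exactly
    matches a domain we expect. Exact matching defeats common lookalikes
    such as 'pay-example.com' or 'example.com.evil.io'."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in expected_hosts

# A QR sticker on a parking meter should resolve to the operator's
# real payment domain (hypothetical allowlist):
ALLOWED = {"pay.cityparking.example"}
```

For instance, `https://pay.cityparking.example.evil.io/checkout` fails the check even though the expected domain appears in the URL, which is exactly the trick spoofed payment pages rely on.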

Meanwhile, if a QR code is sent from an unknown sender via email, Levinzon advises contacting the sender directly before entering login credentials or downloading files.

“We recommend applying the same logic everywhere: stay skeptical whether you receive a message from a coworker or on your personal social media account. However, vigilance is only part of the story. To maximize security, users also need basic safeguards – use a VPN on public Wi-Fi, install updates promptly, use strong passwords, and enable multi-factor authentication on all accounts,” he says.

CFOs Set New Bar for Finance AI: Show Your Work and Know When to Stop

Posted in Commentary with tags on January 28, 2026 by itnerd

The debate is over. CFOs aren’t asking whether to adopt AI in finance anymore. They’re asking why every solution forces them to choose between speed they can’t audit and control that doesn’t scale.

A new research study from Wakefield Research surveyed 100 CFOs at mid-market U.S. companies ($50M-$500M revenue). Between 60 and 77 percent already plan to adopt AI depending on the use case. But the findings reveal a massive trust gap blocking execution.

The trust gap is real. 96% of CFOs say AI’s biggest benefit is freeing time for strategic work. But only 14% completely trust AI to deliver accurate accounting data on its own. And 97% say human oversight is critical. That’s not a contradiction – it’s CFOs defining the solution.

The findings reveal a market stuck between two broken models. AI copilots – whether standalone or embedded in legacy tools – still require accountants to review transaction by transaction, delivering single-digit productivity gains. AI agents – black-box LLM wrappers with finance branding – promise full automation but deliver unacceptable risk: no way to verify accuracy, no real audit trail, and low understanding of business context.

CFOs want neither babysitting nor black boxes. They want what they are calling “intelligent escalation” – AI that operates autonomously on routine transactions but knows when it’s encountering ambiguity and escalates with full context. One CFO put it simply: “We need an autopilot – fast, accurate and with the sound judgment of our most reliable accountant.”
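The “intelligent escalation” idea can be sketched as a simple policy gate: act autonomously on routine, high-confidence transactions and escalate everything else with the reasons attached. This is a hedged illustration of the concept only; the thresholds, field names, and `route` function are hypothetical, not anything from the report:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    confidence: float  # model's confidence in its own categorization
    category: str

# Hypothetical policy thresholds
CONFIDENCE_FLOOR = 0.95
AMOUNT_CEILING = 10_000.00

def route(txn: Transaction) -> str:
    """Auto-post routine work; escalate ambiguity with full context."""
    reasons = []
    if txn.confidence < CONFIDENCE_FLOOR:
        reasons.append(f"low confidence ({txn.confidence:.2f})")
    if txn.amount > AMOUNT_CEILING:
        reasons.append(f"amount above policy ceiling (${txn.amount:,.2f})")
    if reasons:
        # A real system would attach source documents and model rationale
        return "escalate: " + "; ".join(reasons)
    return "auto-post"
```

The point of the pattern is that the escalation carries its reasons, so the human reviewer gets context rather than a bare exception, which is what distinguishes it from both copilot-style review of every transaction and black-box automation.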

The bottleneck isn’t AI intelligence – it’s AI judgment. As foundation models get smarter, the differentiator isn’t raw capability – it’s understanding business context, company policies, and when a decision requires human input. Speed and accuracy are table stakes. Judgment is what separates automation from intelligent escalation.

The study makes clear what finance leaders demand: speed, verifiable accuracy, full audit trails, and intelligent escalation – AI that earns the right to operate autonomously by demonstrating judgment about when to act and when to ask.

CFOs have drawn the line: AI that can’t show its work and doesn’t know when to escalate is unacceptable in finance.

Read the full report from Maximor here: Finance AI Adoption Benchmarking Report.

MIND Announces Autonomous DLP for Agentic AI

Posted in Commentary with tags on January 28, 2026 by itnerd

Enterprises are moving quickly to adopt agentic AI to drive real business outcomes, including faster decision-making, increased productivity and new operational efficiencies. But as AI systems become more autonomous, those outcomes depend on one critical factor: whether organizations can trust how their data is accessed, used and controlled.

Today, MIND announced DLP for Agentic AI, a data-centric approach to AI security designed to help organizations safely achieve the business value of agentic AI by ensuring sensitive data and AI systems interact safely and responsibly.

Agentic AI can autonomously create, access, transform and share data across SaaS applications, local devices, homegrown systems and third-party tools. While this unlocks meaningful gains in speed and scale, it also introduces new risks. Without clear visibility and controls, data security gaps can undermine AI initiatives, slow adoption and put business outcomes at risk.

Data Security as the Foundation for AI Outcomes

As organizations evaluate how to secure agentic AI, new security categories are appearing. However, most of these emerging approaches fail to secure the critical foundation that Agentic AI relies on: the data itself.

MIND’s DLP for Agentic AI starts with the belief that business outcomes depend on whether AI systems have the right access to the right data at any point in time. Instead of securing models or reacting to outputs, MIND ensures sensitive data is understood, governed and protected before any AI agent can access or act on it.

With this data-centric approach, organizations can:

  • Identify which AI agents are active across the enterprise and on endpoints, including embedded SaaS capabilities, homegrown agents and third-party tools
  • Detect risky data access by AI agents, monitor behavior in real time and autonomously alert and remediate issues as they emerge
  • Apply the right controls so data and agentic AI interact safely, without slowing productivity or innovation

By putting data security and controls at the center of AI adoption, MIND helps organizations turn AI potential into measurable business results with the right guardrails.

Customers are already using MIND to support enterprise AI initiatives and the secure use of GenAI while maintaining strong data security.

Built for an Agentic AI World

Traditional DLP programs were designed for predictable, human-driven workflows. Agentic AI operates differently, moving at AI speed and acting autonomously. MIND’s DLP for Agentic AI brings context-aware automation to data security, helping teams prevent risk before it impacts the business.

As organizations continue to invest in agentic AI, MIND positions data security and controls as the missing piece required to achieve AI-driven outcomes safely and sustainably.

To learn more about DLP at AI speed and how MIND enables secure, outcome-driven AI adoption, visit mind.io.

New Sumo Logic Security Operations Report Finds Two-Thirds of Security Leaders Lack Integrated Security Tooling

Posted in Commentary with tags on January 28, 2026 by itnerd

Sumo Logic today released its 2026 Security Operations Insights report, which found that security is complicated by a growing number of cloud tools, sprawling tech stacks and a lack of communication that leads to less reliability for security teams.

Security is becoming increasingly complex for enterprise organizations, as application environments are changing rapidly. AI hype has created a rush to develop and adopt AI tools while broadening the attack surface and forcing organizations to reconsider whether their security solutions are actually providing value.

The Sumo Logic 2026 Security Operations Insights report surveyed more than 500 IT and security leaders and was developed with independent research firm UserEvidence. Key findings include:

  • 90% of security operations leaders say supporting data sources from multi-cloud and hybrid-cloud environments is very or extremely important for their SIEM, highlighting the continued need for data pipeline management.
  • Only 51% say their current SIEM is very effective at reducing mean time to detect and respond to threats. And just 52% are very confident their current SIEM can scale to meet future security and cloud operations needs.
  • 90% of security leaders say AI/ML is extremely or very valuable in reducing alert fatigue and improving detection accuracy. Yet their most common AI use cases focus on basic tasks like threat detection. These findings indicate that AI adoption isn’t as widespread through advanced security workflows as marketing narratives often suggest.
  • 93% of enterprise organizations use at least three security operations tools, and 45% use six or more. It’s no surprise that over half (55%) of respondents report having too many point solutions in their security stack.
  • 80% of enterprise organizations say security and DevOps use shared observability tools, but only 45% say the two teams are very aligned on tooling and workflows. 100% say a unified platform for logs, metrics, and traces would be valuable for their security and DevOps teams.
  • 70% of respondents say they’ve fully or mostly automated their threat detection and response process, with 25% reporting it’s fully automated. Those who rely on a mostly or fully manual process are in the extreme minority.

These findings underscore that enterprise security leaders are overwhelmed. As AI continues to complicate the threat landscape, it adds yet another technology that needs to be monitored, secured, and used in security. The solution isn’t a larger security tech stack with more siloed tools. Instead, it’s a unified platform that acts as a single source of truth for DevSecOps, providing real-time insights and visibility across the entire environment.

Resources