Unit 42 Research: Fully Autonomous AI Attacks Closer Than Ever

Posted in Commentary with tags on April 23, 2026 by itnerd

Palo Alto Networks has shared new research regarding how effective autonomous AI offensive capabilities are against cloud environments. While Unit 42 did not use frontier AI models in testing, this research is a crucial look at how powerful AI models may ultimately be weaponized in cyberspace.

Building on the November 2025 Anthropic disclosure that showed AI acting as the operator in an espionage campaign, Unit 42 answers the question: Can AI systems operate autonomously end-to-end to attack cloud environments, or do they still require human guidance?

Unit 42’s research & findings include:

  • Unit 42 created “Zealot,” a multi-agent penetration-testing proof of concept designed to test whether AI could independently compromise a hardened cloud environment without any human oversight.
  • In sandboxed GCP tests, the multi-agent system autonomously executed a full attack chain, including Server-Side Request Forgery (SSRF) exploitation, metadata service credential theft, service account impersonation, privilege escalation, and BigQuery data exfiltration.
  • AI-driven attacks have reached functional maturity and current LLMs can chain attacks with minimal human guidance. The window between initial access and data loss is shrinking as tools like Zealot leverage misconfigurations faster and more consistently than a human attacker. 
  • However, a purely autonomous multi-agent cyber attack was not entirely achievable: manual oversight was needed to keep the AI from going down irrelevant rabbit holes.
  • Current security detection models optimized for human attack patterns will struggle to catch agent-based operations that chain actions across services in seconds.
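The SSRF-to-metadata step in that chain is well documented for GCP: the instance metadata service at `metadata.google.internal` (169.254.169.254) hands out service-account tokens to anything on the instance that can reach it. As a hypothetical defensive sketch (not from the Unit 42 research), here is the kind of URL filter a service that fetches user-supplied URLs could apply to cut off that step:

```python
from urllib.parse import urlparse

# Hosts that expose cloud instance metadata; reaching them through a
# user-supplied URL is the classic SSRF-to-credential-theft move.
METADATA_HOSTS = {
    "metadata.google.internal",
    "169.254.169.254",
    "metadata",
}

def is_metadata_ssrf(url: str) -> bool:
    """Flag URLs that point at a cloud metadata endpoint.

    Note: a production filter must also resolve hostnames and block
    redirects, since attackers use DNS tricks to evade literal matches.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in METADATA_HOSTS
```

For example, `is_metadata_ssrf("http://169.254.169.254/computeMetadata/v1/")` returns `True`, while an ordinary external URL passes. This is a sketch of one layer only; the research's larger point is that an agent chains this with impersonation and exfiltration in seconds.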

You can read the research here: https://unit42.paloaltonetworks.com/autonomous-ai-cloud-attacks/

Check Point Software Earns 2026 Technology Innovation Leadership Recognition for Prevention‑first WAF and API Security

Posted in Commentary with tags on April 23, 2026 by itnerd

Check Point today announced it has been honored with Frost & Sullivan’s 2026 Technology Innovation Leadership recognition for its advancements in web application and API protection (WAAP). The new recognition illustrates how Check Point’s prevention-first strategy and open-source contributions have established a new benchmark for securing modern digital architectures.

Check Point WAF is purpose-built to protect modern, cloud-native and AI-powered applications in real time. As applications grow more dynamic, organizations need security that prevents threats before they impact the business, helping customers move forward with confidence while reinforcing Check Point’s leadership in the future of cyber security.

Frost & Sullivan highlights that as enterprises accelerate adoption of cloud-native architectures, APIs, and AI-driven applications, the attack surface has expanded well beyond traditional security tools. Check Point’s Cloud Security Report reinforces this urgency, finding that 65% of organizations have experienced cloud-related breaches. Frost & Sullivan recognizes Check Point for solving these challenges head-on, with its WAF and API security platform emerging as an alternative to legacy solutions that struggle to defend against today’s sophisticated attacks.

The report highlights several key strengths of Check Point WAF, primarily focusing on its advanced AI capabilities, unified platform approach, and operational efficiency:

  • Advanced Dual-Layer AI Engine: Delivers close to 100% threat detection with a false-positive rate under 1%, preemptively blocking all attack types, including zero-days, without the need for emergency patching, giving security teams high-confidence protection
  • Unified Application Security Across the Full Attack Surface: Consolidates WAF, API, GenAI, bot, DDoS, file security, and CDN capabilities, eliminating the fragmented point solutions that create blind spots and increase administrative overhead
  • Operational Efficiency and Automation: Eliminates manual rule creation and signature updates via self-learning AI, continuously adapting to application changes and reducing false positives, emergency patching cycles, and operational lift
  • A Community-Driven Model That Accelerates Innovation: Commitment to transparency and collective intelligence enables a community-driven approach to threat hardening, accelerating updates for emerging threats and techniques

The results speak for themselves: <1% false positives, automatic prevention of zero-day threats without emergency updates, and incident response times measured in hours rather than days. Security and application teams see significant reductions in rule management overhead, while end users benefit from improved application availability and reliability. As Frost & Sullivan noted, “by converting continuous learning and runtime observability into instant, customized threat prevention with limited human intervention, Check Point WAF sets a new benchmark for what organizations should expect from a web application firewall in the cloud native and AI era.”
 
To learn more about this recognition, visit the Check Point blog or access the full Frost & Sullivan report here.

Sage launches Sage HCM 

Posted in Commentary with tags on April 23, 2026 by itnerd

Sage today announced Sage HCM, a new human capital management solution for mid-market organizations in North America.

Integrated with Sage Intacct, Sage HCM connects HR, payroll and workforce data with financial management to give organizations clearer visibility and control over workforce costs, often their largest and most dynamic expense, while improving payroll accuracy and supporting better workforce planning.

Launching with Sage HCM are industry capabilities, such as Sage HCM for Construction, designed to help firms connect labour, payroll and job costing in a single system, alongside a new HCM Agent that uses AI-powered automation to streamline HR and payroll workflows.

Sage will showcase Sage HCM at Sage Future, taking place in San Francisco from April 28-30.

Bringing workforce costs into clearer financial view

For many organizations, workforce data still sits in separate HR, payroll and finance systems, making it harder for HR leaders, payroll teams and CFOs to understand workforce costs and performance. According to Deloitte’s Human Capital Trends research, nearly three quarters (74%) of organizations say improving how workforce data supports decision-making is now a critical priority.

Sage HCM is designed to address this challenge by bringing core HR, payroll, time and talent workflows together in one connected system. Built for mid-market organizations with more complex requirements, including multi-entity operations and multi-jurisdiction payroll, it helps customers gain a clearer view of workforce costs and their impact on business performance.

Designed for industry needs, including construction

As part of this launch, Sage HCM for Construction extends these capabilities for firms managing complex labour and project-based operations. It includes support for union rules, certified payroll and prevailing wage requirements, while linking labour data directly to project financials.

AI-powered support for HR and payroll workflows

Sage HCM launches with the HCM Agent, Sage’s first AI agent dedicated to HR and payroll workflows. It helps organizations streamline tasks such as payroll preparation, validation and reconciliation, while highlighting potential compliance risks and reducing manual effort.

The HCM Agent is designed to improve efficiency and accuracy without reducing oversight, helping teams automate routine work while maintaining control over critical processes such as payroll and compliance.

New opportunities for partners

The launch of Sage HCM also creates new opportunities for Sage partners to support customers with workforce management, payroll and compliance alongside finance transformation. Partners can deliver Sage HCM together with Sage Intacct and industry-specific solutions, helping customers bring workforce and financial data together in a more connected system.

To get hands-on experience with Sage HCM, sign up to attend Sage Future here: Sage Future

Hisense Unveils UR9 Series

Posted in Commentary with tags on April 23, 2026 by itnerd

Hisense today announced the global launch of its latest premium television lineup, the UR9 Series.

More than a product launch, UR9 represents a new interpretation of “Natural and Real Colour” — one that is not only more vivid, but also more natural, comfortable and true to life for everyday viewing. At the heart of UR9 is a breakthrough in how colour is created and experienced. By generating colour directly at the light source, UR9 delivers richer tones, more accurate details and a viewing experience that feels closer to how the human eye perceives the real world — reducing visual fatigue while enhancing immersion.

This leap in user-centric experience comes from Hisense’s industry-leading RGB Mini-LED technology. In March 2026, the Consumer Technology Association (CTA) Video Division Board released the official industry definition for RGB LED TVs. As a CTA member, Hisense is a major force in establishing the global industry standard of what real RGB Mini-LED is, and continues to push the boundaries of conventional display technology.

At the forefront of this innovation, the Hisense UR9 Series represents a quantum leap in display engineering. Moving beyond traditional Mini-LED, it introduces a full RGB Mini-LED backlight system, where each LED integrates independent red, green and blue diodes — enabling unprecedented control over colour, brightness and contrast. This architecture achieves up to 100 per cent of BT.2020 colour gamut, delivering colour performance that is not only more expansive, but also more precise and lifelike.

Powering this breakthrough is the all-new Hi-View AI Engine RGB processor. Through real-time coordination of colour and brightness at the zone level, the processor ensures every frame is dynamically optimized — bringing greater depth, clarity and balance to each scene.

Beyond visual excellence, the UR9 is engineered as a complete sensory experience. Its integrated 4.1.2 Multi-Channel Surround Sound system, professionally tuned by Devialet, creates a fully immersive 360-degree soundscape. Featuring up-firing height speakers, dedicated surround channels and a powerful built-in subwoofer, the system delivers cinematic depth, dynamic range and spatial realism without the need for external equipment.

To ensure consistently comfortable viewing across environments, UR9 also delivers up to three-times deeper blacks and higher contrast, allowing for clearer details even in bright daylight. The Obsidian Panel reduces reflections to just 1.5–1.8 per cent by absorbing ambient light. For gaming enthusiasts, a native 180Hz refresh rate on selected models delivers ultra-smooth motion and responsiveness, taking next-generation gaming experiences to new heights.

Supporting a comprehensive suite of premium formats — including Dolby Vision IQ, IMAX Enhanced, and Filmmaker Mode — the UR9 is built to meet the evolving demands of both content creators and consumers. Its performance is complemented by a refined Pure Elegance Design, featuring a premium metal stand and a near bezel-less finish that seamlessly integrates into any modern living space.

With the launch of the UR9, Hisense brings its vision of Innovating A Brighter Life into sharper focus. By redefining “Natural and Real Colour” through true RGB Mini-LED, Hisense is not only advancing display technology, but shaping a viewing experience that is more natural, more immersive and more human-centric.

For more information, please visit hisense-canada.com.

ESET Research: New NGate hides in NFC payment app and possibly built with AI

Posted in Commentary with tags on April 23, 2026 by itnerd

ESET Research has discovered a new variant of the NGate malware family that abuses a legitimate Android application called HandyPay, instead of the previously leveraged NFCGate tool. The threat actors took the app, which is used to relay NFC data, and patched it with malicious code that appears to have been AI generated. As with previous iterations of NGate, the malicious code allows the attackers to transfer NFC data from the victim’s payment card to their own device and use it for contactless ATM cash-outs and unauthorized payments. Additionally, the code can capture the victims’ payment card PINs and exfiltrate them to the operators’ C&C server. The primary targets of this campaign are users in Brazil; however, NFC-based attacks are expanding into new regions.

The malicious code used to trojanize HandyPay shows signs of having been produced with the help of GenAI tools. Specifically, the malware logs contain an emoji typical of AI-generated text, suggesting that LLMs were involved in generating or modifying the code, although definitive proof remains elusive. This fits a broader trend in which GenAI lowers the barrier to entry for cybercriminals, enabling threat actors with limited technical skill to produce workable malware.

ESET Research believes that the campaign distributing the trojanized HandyPay began around November 2025 and remains active. It should also be noted that the maliciously patched version of HandyPay has never been available on the official Google Play store. As an App Defense Alliance partner, we shared our findings with Google. ESET also reached out to the HandyPay developers to alert them about the malicious use of their application. 

As the number of NFC threats keeps rising, the ecosystem supporting them has become more robust. The first NGate attacks employed the open-source NFCGate tool to facilitate the transfer of NFC data. Since then, several malware-as-a-service (MaaS) offerings with similar functionality have become available for purchase. However, in this campaign the threat actors decided to go with their own solution and maliciously patched an existing app – HandyPay.

The first new NGate sample is distributed through a website that impersonates Rio de Prêmios, a lottery run by the Rio de Janeiro state lottery organization (Loterj). The second NGate sample is distributed via a fake Google Play web page as an app named Proteção Cartão (machine translation: Card Protection). Both sites were hosted on the same domain, strongly implying a single threat actor. The malware abuses the HandyPay service to forward NFC card data to an attacker-controlled device. Apart from relaying NFC data, the malicious code also steals payment card PINs, enabling the threat actor to use the victim’s payment card data to withdraw cash from ATMs.

For a more detailed analysis of the new NGate variant, check out the latest ESET Research blog post, “New NGate variant hides in a trojanized NFC payment app,” on WeLiveSecurity.com. 

Namastex.ai npm Packages Hit with TeamPCP-Style CanisterWorm Malware

Posted in Commentary with tags on April 22, 2026 by itnerd

Researchers have uncovered malicious Namastex.ai npm packages with the tradecraft of TeamPCP/LiteLLM style CanisterWorm malware, including install-time execution, credential theft from developer environments, off-host exfiltration, canister-backed infrastructure, and self-propagation logic intended to compromise additional packages.

More details here: https://socket.dev/blog/namastex-npm-packages-compromised-canisterworm

Dan Moore, Sr. Director CIAM Strategy at cybersecurity company FusionAuth, commented:

“This newest supply chain threat in the npm ecosystem demonstrates that a lot of the time, the issue isn’t an organization’s code, but its credentials. Long-lived, over-permissioned CI/CD tokens are as risky as passwords written on a sticky note. Organizations need to have more than credentials for software systems. In order to maintain identity hygiene, organizations should rotate, scope, and continually monitor credentials.”

AI for coding is great. But you have to be incredibly careful to make sure that the benefit of being able to code more efficiently isn’t overshadowed by having threat actors set up shop by infecting your code.
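One concrete precaution against install-time execution like CanisterWorm’s is to look at a package’s lifecycle scripts before installing it, since npm runs hooks such as `preinstall` and `postinstall` automatically. As a hypothetical sketch (the hook names are real npm lifecycle events; the function itself is illustrative, not a complete audit):

```python
import json

# npm lifecycle hooks that execute automatically at install time,
# which is where install-time malware like CanisterWorm hides.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_scripts(package_json_text: str) -> dict:
    """Return any install-time lifecycle scripts a package declares."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}
```

Feeding it a manifest that declares a `postinstall` script would surface that entry while ignoring benign scripts like `test`. Pairing a check like this with npm’s `ignore-scripts` setting keeps install hooks from running unreviewed.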

Lovable access issue exposes project data, credentials in AI-generated coding

Posted in Commentary with tags on April 22, 2026 by itnerd

A security issue involving AI coding platform Lovable allowed users to access other users’ project data, including source code, database credentials, AI chat histories, and customer data, according to reports and user disclosures.

The issue was publicly highlighted after a user demonstrated that a free account could access data across projects created before November 2025.

Lovable initially stated there was no data breach, describing the behavior as expected for public projects, but later acknowledged a backend error that temporarily enabled access to AI chat data. The company updated its visibility and permission settings following the incident and said the issue had been addressed.

The incident involved exposure of project-level data within the platform environment and did not include confirmation of broader system compromise. Reporting indicates the issue remained unresolved for a period of time after being reported before changes were implemented.

Ryan McCurdy, VP of Marketing, Liquibase had this to say:

   “This incident is a reminder that the risk in AI-generated development is not just bad code. It is bad control design. When application creation speeds up, permissions, secrets exposure, and database access paths can become part of the attack surface just as quickly. If teams do not put governed change, least-privilege access, and clear separation between public artifacts and sensitive backend context in place, AI can amplify operational risk faster than traditional review processes can catch it.”

John Carberry, Solution Sleuth, Xcape, Inc. adds this comment:

   “The Lovable data exposure incident highlights a catastrophic failure in the fundamental security architecture of AI-powered “vibe coding” platforms. By failing to implement basic ownership validation on API endpoints, a textbook Broken Object Level Authorization (BOLA) flaw, Lovable allowed any user to traverse project IDs and scrape the source code, database credentials, and AI chat histories of others.

   “For security leaders, the primary risk is a silent supply chain compromise: while Lovable claims no “breach” of its own servers, the exposure of third-party secrets like Stripe and Supabase keys means the applications built on the platform are now effectively backdoored.

   “Technically, the crisis was compounded by a February 2026 backend regression that re-opened access to sensitive chats and a response cycle that spent 48 days ignoring a bug bounty report. Organizations must treat AI-generated code with extreme caution, ensuring that “vibe coding” speed doesn’t bypass mandatory secret scanning, environment variable isolation, and the hard-won security logic of the last twenty years.

   “Lovable proved that while AI can write your code, it can’t write your common sense, especially when “public by default” includes your Stripe secret keys.”

Hannah Perez, Director of Marketing, Suzu Labs followed up with this:

   “As we move toward AI-generated software, the ‘shared responsibility model’ is becoming dangerously blurred. Users expected a private sandbox for innovation, but instead found a communal space with paper-thin walls.

   “Lovable’s eventual pivot is welcome, but the delay between the initial report and the actual fix suggests that AI startups are currently outpacing their own security protocols, which is as expected for most. In the rush to ‘vibe code,’ fundamental safety is being treated as a post-launch patch rather than a requirement. For this industry to mature, Secure by Default must be the non-negotiable standard for any platform handling sensitive IP and source code.”

Vishal Agarwal, CTO, Averlon provided this comment:

   “It’s one thing to have access to the sauce. It’s another to have access to its recipe. With inadvertent leakage of chat history, attackers gain access to reconnaissance information that can be leveraged to target the organization more precisely.

   “What makes sophisticated attackers dangerous isn’t just their technical capability, it’s their detailed understanding of the target’s systems. Exposing chat history and source code together hands that understanding directly to an attacker.”

This highlights the fact that AI has to be part of your security planning. Otherwise really bad things will happen. And this is a case in point.
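For readers wondering what the BOLA fix Carberry alludes to actually looks like, the core of it is an ownership check on every object lookup, not merely a check that the requester is logged in. A hypothetical sketch with an in-memory store (the store and field names are illustrative, not Lovable’s actual schema):

```python
# Illustrative project store: each project records its owner alongside
# the sensitive data that must never leak across accounts.
PROJECTS = {
    "p1": {"owner": "alice", "source": "...", "db_credentials": "..."},
    "p2": {"owner": "bob", "source": "...", "db_credentials": "..."},
}

def get_project(project_id: str, requester: str) -> dict:
    """Fetch a project only if the requester owns it (the BOLA fix)."""
    project = PROJECTS.get(project_id)
    if project is None:
        raise KeyError("project not found")
    # Object-level authorization: compare the object's owner to the
    # requester on every lookup, instead of trusting the project ID.
    if project["owner"] != requester:
        raise PermissionError("forbidden")
    return project
```

With this check in place, a user enumerating project IDs gets a `PermissionError` for anything they don’t own, which is exactly the behavior the traversal attack against Lovable relied on being absent.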

Malicious Trading Site Drops “Needle Stealer” to Harvest Browser Data

Posted in Commentary with tags on April 22, 2026 by itnerd

Researchers have uncovered a new attack campaign using a previously observed malware loader to deliver a different threat: Needle Stealer, a data-stealing malware designed to quietly harvest sensitive information from infected devices, including browser data, login sessions, and cryptocurrency wallets. This time, attackers use a website promoting a tool called TradingClaw (tradingclaw[.]pro), which claims to be an AI-powered assistant for TradingView, a legitimate platform used by traders to analyze financial markets. The fake TradingClaw site is not part of TradingView, nor is it related to the legitimate startup tradingclaw[.]chat. Instead, it’s being used here as a lure to trick people into downloading malware.

More details can be found here: https://www.malwarebytes.com/blog/threat-intel/2026/04/malicious-trading-website-drop-malware-that-hands-over-your-browser-to-attackers  

Ensar Seker, CISO at SOCRadar, commented:

“This campaign reflects a growing shift where threat actors weaponize trust in legitimate platforms like TradingView by building highly convincing AI-themed lures around them. The use of “AI trading assistants” is particularly effective because it targets both curiosity and financial motivation, lowering user skepticism. What stands out here is the reuse of a known loader to deploy a different payload, which shows how modular and scalable modern malware operations have become.

More importantly, the focus on harvesting browser sessions and crypto wallets signals that attackers are prioritizing immediate monetization over persistence. Once session tokens are stolen, MFA becomes irrelevant, and accounts can be hijacked in real time. Organizations and individuals need to treat any third-party tool claiming integration with financial platforms as high risk unless it is directly verified.

This is not just malware delivery, it is identity compromise at scale disguised as innovation.”

This is scary, as it represents a big jump in what threat actors can do. Thus you really need to be hyper-aware of threats, as they can come from anywhere and pop up in the most unexpected places.

Guest Post: Mythos access by Discord group reveals real danger of AI-powered hacking

Posted in Commentary with tags on April 22, 2026 by itnerd

By Stefanie Schappert

A Discord group’s unauthorized access to Anthropic AI’s powerful Mythos model on Tuesday is doing more than raising questions about the guardrails around powerful AI cybersecurity tools.

It’s exposing a bigger problem for the cybersecurity industry: AI can now find flaws and exploit them so quickly that defenders may be the ones left truly exposed.

A group of AI-fueled Discord info-seekers – one of them linked to a third-party vendor of the AI startup – managed to access the highly gatekept cybersecurity defense system in February, on the same day as its debut.

The group used a mixed bag of insider access, web-scouring bots, and some raw ingenuity, and the breach is triggering a fresh wave of alarm across an already spooked industry.

Ironically, as the Discord incident was unfolding, the Cloud Security Alliance – in a rapid-response briefing published days after Mythos was unveiled – warned that AI was accelerating vulnerability discovery faster than organizations could keep up, creating the perfect storm for defenders.

By finding thousands of flaws and zero-days across hundreds of software systems, Mythos has effectively shrunk the patch window defenders have relied on for years – from days to just a few hours.

If Mythos-like capabilities are released in the wild and adopted by hackers, security teams will inevitably be tasked with building an entirely new playbook to help decide how to prioritize and fix what matters – and there’s still no guarantee they can stem the cyber bleeding. 

More than 250 security leaders helped shape the briefing, which argues the challenge is no longer just finding flaws, but deciding which ones actually pose real risk – and fixing them before they can be turned into working exploits.

It’s a shift some security experts say the industry is still underestimating. The problem is no longer discovery alone. It is remediation, accountability, and whether defenders can keep up as AI moves from identifying vulnerabilities to showing how they can be exploited in the real world.

The Mythos moment may ultimately be less about a single powerful cybersecurity model and more about what happens in the shrinking window between finding a flaw and weaponizing it.

Anthropic’s answer, for now, is Project Glasswing – a tightly controlled effort to use Mythos to help secure critical software before comparable models become more widely available.

But even that highlights the larger issue at hand: the industry knows what is coming and is still scrambling to build that much-needed playbook in time to defend against larger threats, such as nation-state or ransomware attackers.

If a group of AI nerds could get into Mythos – allegedly without malicious intent – imagine the fallout if the next ones to slide through that door were actual criminals.

ABOUT THE EXPERT

Stefanie Schappert, a senior journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity, immersed in the security world since 2019. She has a decade-plus of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took 3rd place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and of Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.

OVHcloud and Alchemy enter strategic relationship 

Posted in Commentary with tags on April 22, 2026 by itnerd

OVHcloud and Alchemy today announced a strategic relationship. Together, the two companies will enable decentralized app and chain developers to benefit from Alchemy’s powerful suite of tools and Supernodes, Alchemy’s blockchain engine, on the secure, decentralized and high-performance foundation of OVHcloud’s cloud infrastructure.

The strategic relationship has already started to have an impact. The performance-price ratio offered by OVHcloud has enabled Alchemy to scale to new regions ahead of schedule, even in highly regulated markets, helping developers around the world to launch decentralized apps and chains faster. The OVHcloud platform seamlessly interconnects with Alchemy’s existing cloud infrastructure, including hyperscale offerings, giving Alchemy a truly multi-cloud environment. 

Earlier this year, Alchemy supported OVHcloud’s blockchain startup accelerator, helping to build an ecosystem where startups, enterprises, and partners co-innovated and worked to deliver the next generation of blockchain services at global scale.