Attackers weaponizing VS Code and Cursor tasks to silently infect developer systems

Posted in Commentary with tags on March 9, 2026 by itnerd

Researchers from Abstract Security’s ASTRO research team have uncovered new developments in the evolving “Contagious Interview” campaign, showing how attackers are increasingly abusing developer tools, including Visual Studio Code and the AI-powered Cursor code editor, to silently execute malware on developer machines. Here is the blog post: Contagious Interview: Evolution of VS Code and Cursor Tasks Infection Chains Part 2.

ASTRO analysts detail how attackers embed malicious commands in IDE task configuration files. When a developer opens a cloned repository and approves the standard workspace trust prompt, the tasks execute automatically, triggering multi-stage infection chains without the victim ever running code manually.
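The abuse hinges on a documented VS Code (and Cursor) feature: a task defined in `.vscode/tasks.json` can request automatic execution when the folder is opened, gated only by the workspace trust prompt. A minimal, benign illustration of the shape such a task takes (the command here is a harmless placeholder, not taken from the campaign):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "setup",
      "type": "shell",
      "command": "echo placeholder-for-attacker-staged-command",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

In a malicious repository, that `command` field would instead fetch and run a staged payload, which is why a shell command appearing in a task file of a freshly cloned repo deserves scrutiny before trust is granted.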

Key findings from the research include:

  • New payload staging infrastructure: Attackers are shifting from previously exposed hosting platforms to GitHub Gists, URL shorteners, and Google Drive to stage malicious scripts and payloads.
  • Developer-focused social engineering: Malicious repositories disguised as interview projects or legitimate development tools execute automatically when opened in an IDE.
  • Multi-stage infection chains: Initial task execution downloads additional loaders and can ultimately deploy infostealers or backdoors targeting browser credentials, crypto wallets, and system data.
  • Evasion tactics: Some payloads are hidden off-screen in configuration files or masquerade as legitimate GPU or driver tooling to avoid detection.

The report also outlines detection opportunities for security teams, including monitoring IDE-spawned shell commands, suspicious use of URL shorteners in configuration files, and unusual process chains involving Node.js and Python runtimes.

Given the growing use of AI-assisted development environments and the trust developers place in their toolchains, researchers warn this technique could become a major new software supply-chain attack vector.

The first blog post about Contagious Interview is here: https://www.abstract.security/blog/contagious-interview-evolution-of-vscode-and-cursor-tasks-infection-chains.

Russia State Hackers Target Signal & WhatsApp Accounts of Officials & Journalists

Posted in Commentary with tags on March 9, 2026 by itnerd

The Dutch Minister of Defence warns of a cyber campaign linked to Russia that targets accounts on messaging platforms such as Signal and WhatsApp, belonging to government officials, military staff, and journalists.

The Russian campaign focuses on persuading users to divulge their security verification codes and PINs, allowing the hackers to gain access to the users’ Signal or WhatsApp accounts. The most frequently observed method is to masquerade as a Signal support chatbot in order to induce targets to divulge their codes, which the hackers can then use to take over the account. Another method used by the Russian actors abuses the ‘linked devices’ function within Signal and WhatsApp.

Once an account has been successfully compromised, the hackers can read incoming messages, including messages in the victim’s chat groups. The Russian hackers likely gained access to sensitive information through this campaign.

Ömer Faruk Diken, cybersecurity researcher at SOCRadar:

“Messaging apps such as Signal and WhatsApp are widely used for private and professional communication. Many officials and journalists rely on them because they use end-to-end encryption. However, though encryption protects messages during transmission, it does not prevent attackers from accessing the account itself. If attackers gain control of the account or connect their own device, they can read conversations and collect information from chats and contact lists. For threat actors involved in espionage, this access can provide insight into discussions, contacts, and internal coordination.

“The warning from Dutch officials highlights a cyber campaign that targets messaging accounts used by people who handle sensitive information. By using social engineering and abusing messaging app features, attackers attempt to gain access to private conversations and contacts. Incidents like this also highlight the importance of basic security practices. Users should avoid clicking unknown links, never enter passwords or verification codes on suspicious pages, and always verify the source of requests for sensitive information. Email addresses can also be spoofed, so messages that ask users to click links or provide input should be checked carefully. When possible, organizations should enforce multi-factor authentication to add another layer of protection to communication accounts.”

Lydia Atienza, Principal Threat Intelligence Researcher at Outpost24:

“Based on the techniques described in the advisory issued by Dutch intelligence agencies, there is little evidence of particularly novel tradecraft. The methods resemble the same social-engineering tactics long used by financially motivated cybercriminals to compromise messaging accounts. This serves as a reminder that state-linked actors do not always rely on highly sophisticated exploits. In many cases, the same techniques commonly seen in cybercrime can be just as effective in espionage campaigns.”

Additional Resources:

SOCRadar Blog: Russia Targets Signal and WhatsApp Accounts, Dutch Officials Warn

Microsoft Warns Hackers Operationalizing AI to Accelerate Tradecraft 

Posted in Commentary with tags on March 9, 2026 by itnerd

Microsoft has warned that threat actors are operationalizing AI across the cyberattack lifecycle to accelerate tradecraft, abusing both intended model capabilities and jailbreaking techniques to bypass safeguards and carry out malicious activity. They are embedding AI into their workflows to increase the speed, scale, and resilience of cyber operations.

Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure. For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions.

This dynamic is especially evident in operations likely focused on revenue generation, where efficiency directly translates to scale and persistence. To illustrate these trends, this blog highlights observations from North Korean remote IT worker activity tracked by Microsoft Threat Intelligence as Jasper Sleet and Coral Sleet (formerly Storm-1877), where AI enables sustained, large‑scale misuse of legitimate access through identity fabrication, social engineering, and long‑term operational persistence at low cost.

More details can be found here: https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

Ensar Seker, CISO at SOCRadar:

“AI is rapidly becoming embedded across the entire cyberattack lifecycle, but not always in the ways people expect. In many cases, threat actors are not building their own advanced AI models; instead, they are operationalizing existing generative AI tools to accelerate traditional attacker workflows. We are seeing AI used to scale reconnaissance, generate convincing phishing content in multiple languages, automate vulnerability research, and refine social engineering campaigns. The real shift is not sophistication alone, it is the speed and scale at which attackers can now execute tasks that previously required significant manual effort.

“The biggest impact of AI in cyber operations is efficiency rather than completely new attack techniques. Attackers are using AI to shorten the time between reconnaissance and exploitation. For example, AI can help analyze large datasets of leaked credentials, generate exploit scripts, or summarize technical documentation for vulnerabilities. This lowers the barrier to entry for less experienced actors while allowing more advanced groups to increase operational tempo and run campaigns in parallel across multiple targets.

“However, AI does not replace traditional attacker tradecraft or eliminate the need for human expertise. Sophisticated campaigns, especially those conducted by nation-state groups, still rely heavily on manual reconnaissance, custom tooling, and operational security discipline. AI is acting more as a force multiplier than a replacement for established tactics. Threat actors still need access, infrastructure, and a clear objective; AI simply helps them move faster once those elements are in place.

“For defenders, the most important takeaway is that AI-driven attacks will increasingly look more polished, personalized, and scalable. Security teams should expect a rise in high-quality phishing, automated reconnaissance against external assets, and AI-assisted malware development. The response should not be panic about AI itself, but investment in visibility, especially around identity, external attack surface, and threat intelligence, so organizations can detect attacker activity early in the intrusion lifecycle before AI-assisted campaigns gain momentum.”

Martin Jartelius, AI Product Director at Outpost24:

“We are seeing the same trend in our own research. In one recent investigation, we observed a threat actor using ChatGPT to assist with vulnerability research related to potential zero-day exploitation. In this case, the attacker’s operational security was weak enough that their activity left a visible trail, giving us rare insight into how generative AI is being used as a ‘research assistant’ during attack preparation. What this highlights is that AI is increasingly acting as a force multiplier for attackers, accelerating reconnaissance, scripting, and vulnerability analysis while lowering the technical barrier to entry.”

AI can do a lot of cool things. But it can also do a lot of bad things if given the chance. This illustrates that those who defend against attacks should expect more attacks than ever before. Which is of course a bad thing.

Threat Actors Abuse GitHub Notifications to Deliver Vishing Attacks 

Posted in Commentary with tags on March 9, 2026 by itnerd

The Fortra Intelligence and Research Experts (FIRE) team have uncovered a new phishing tactic that abuses legitimate GitHub notification emails to deliver vishing scams. The research shows how attackers are using trusted infrastructure to get malicious messages into inboxes.

Key findings:

  • Attackers hide vishing lures in GitHub commit messages, which generate legitimate notification emails from noreply@github.com.
  • Researchers say this is the first observed use of GitHub commit messages to distribute vishing scams.
  • Notifications are forwarded through Microsoft 365, helping the messages pass authentication checks and evade filters.
  • The lures impersonate brands such as PayPal and Norton and urge victims to call fake support numbers.

The report is published here: https://www.fortra.com/blog/threat-actors-abuse-github-notifications-to-deliver-vishing-attacks

Mega raises $11.5M to give every SMB an enterprise-grade growth team, without the agency

Posted in Commentary with tags on March 9, 2026 by itnerd

Most small to medium business owners have the same relationship with their marketing agency: they pay for effort and hope it turns into outcomes. It rarely feels like a fair trade. Mega is built to fix that. Today, the company announced an $11.5 million Series A to scale a full-service AI growth engine for SMBs – a platform that replaces traditional agencies with a network of AI agents delivering predictable growth without the overhead.

The Series A funding round was led by Goodwater Capital with participation from Andreessen Horowitz, Atreides, SignalFire and Kearny Jackson. It also includes WNBA stars Diana Taurasi, Breanna Stewart, Kelsey Plum and Nneka Ogwumike. 

The problem is structural. SMBs today are expected to compete in a digital ecosystem built for enterprises, across SEO, paid ads, websites, and emerging AI channels. Agencies are expensive relative to SMB budgets, quality varies wildly, execution is manual, and iteration is slow.  At the same time, AI marketing tools have flooded the market, but most still require business owners to learn and operate complex software. Mega takes a different approach by delivering services via software. Instead of managing tools, customers receive execution and measurable performance.

Mega’s core product is an AI-powered growth engine designed specifically for businesses generating roughly $500,000 to $20 million in revenue. The platform uses a network of specialized AI agents to handle SEO, GEO, paid ads, and website management. From the customer’s perspective, it feels like hiring a high-quality growth team, but it runs as software. The system plans, executes, optimizes, and reports continuously. If a customer signs up and never logs in, their marketing still runs and improves.

Mega’s path to market was unplanned. During Covid, the team was building a video game company. When ChatGPT launched, they began experimenting early, building internal AI tools to accelerate their own growth. Organic traffic increased 100 times. Paid customer acquisition costs dropped by 80 percent. When co-founder Lucas Pellan shared the tools with founder friends, the response was immediate and repeated: can we have that?

With Mega, approximately 55 percent of the work is fully automated, 35 percent is mostly automated with humans in the loop, and 10 percent is executed end to end by humans. This hybrid structure allows Mega to deliver consistent, scalable performance while maintaining quality control. Every campaign feeds data back into the system, improving creative generation, audience targeting, bidding strategies, and optimization logic across the entire customer base.

Mega’s own trajectory reflects the demand for this model. The company went from zero to $10 million in revenue in 10 months. Customers span home services, law firms, healthcare businesses, ecommerce brands, and software companies. 

In one case, Mega helped a Texas medical spa grow search traffic by 174 times. A personal injury law firm increased search visibility by 243 times and began ranking in the top three for key terms. A D2C health brand drove $120,000 in direct website revenue and surpassed its Amazon marketplace performance without increasing ad spend. On average, Mega helps customers grow 20% faster.

The market is massive and underserved. Tens of thousands of marketing agencies serve SMBs across North America, yet most businesses still struggle with unpredictable lead flow, poor ROI, and no visibility into what is working. As digital channels get more competitive and expensive, the gap keeps widening. AI now makes it possible to close it. 

Looking ahead, Mega plans to expand beyond SEO, ads, and websites into managing the entire revenue generation engine for SMBs, including email, outbound, organic social, lead qualification, sales operations, and reporting. The long-term vision is to provide a fully automated growth infrastructure that allows small and mid-sized businesses to compete with enterprise-grade marketing capability, without enterprise overhead.

CData Expands Connect AI Platform with New Agent Tooling and Enterprise-Grade Security

Posted in Commentary with tags on March 9, 2026 by itnerd

CData Software today announced major enhancements to CData Connect AI at the Gartner Data & Analytics Summit (Booth #308). The updates extend CData’s managed Model Context Protocol (MCP) platform with new capabilities across connectivity, context, and control, the three pillars required to move AI from experimentation to production.

Why AI Stalls Before Production

AI investment is accelerating: Gartner®¹ says worldwide AI spending will total $2.5 trillion in 2026. But spending isn’t translating into results. Most generative AI initiatives still stall before reaching production. The bottleneck isn’t model capability; it’s the data infrastructure underneath. Without live connectivity to business systems, semantic intelligence that gives data context to AI, and governance controls that enforce security at scale, AI initiatives fail to deliver business value.

CData’s own State of AI Data Connectivity Report reinforces this reality. Only 6% of organizations are satisfied with their current data infrastructure for AI. More than half still rely on custom-built integrations that can’t scale. And 71% of AI teams spend over a quarter of their implementation time on data integration alone, time spent wiring plumbing instead of building intelligence.

Connect AI: Connectivity, Context, and Control in a Single Platform

CData Connect AI is purpose-built to address the data infrastructure gaps that prevent AI from reaching production. Today’s enhancements extend the platform across all three pillars:

Connectivity: Connect Gateway and 350+ Data Sources

Connect AI provides live, read-write access to more than 350 business systems, without replication or data movement. The new Connect Gateway extends this reach to data sources behind the firewall, with support for SAP, SQL Server, PostgreSQL, and more. The result: AI systems can operate against live data regardless of where it resides.

Context: Expanded Agent Tooling and Toolkits

AI agents need business-aware context to choose the right actions and avoid unnecessary MCP tool calls. But exposing too much context creates new risks: increased token usage, model confusion, and unintended access to sensitive data or operations. Connect AI addresses this challenge with a scoped MCP architecture that precisely controls what each agent can see and do. This release introduces three complementary tool types:

  • Universal Tools provide a normalized set of operations that work consistently across all 350+ connected systems. Instead of exposing hundreds of system-specific tools, agents receive a compact, schema-aware interface ideal for data exploration, ad-hoc analysis, and multi-source reasoning — without tool surface bloat.
  • Source Tools expose tightly defined operations specific to each system. These tools map directly to approved system actions, allowing IT teams to enforce predictable execution, transactional safety, and auditability for production workflows.
  • Custom Tools allow organizations to define purpose-built operations tailored to specific workflows. These tools execute pre-optimized queries with explicit data access limits — reducing token usage, improving performance, and eliminating unintended data exposure.

Workspaces define the data boundary for each agent by specifying exactly which datasets, schemas, or views are accessible. New Toolkits define the action boundary by determining which Universal, Source, or Custom Tools are available. Each Workspace and Toolkit combination can be deployed as a dedicated MCP server, ensuring that agents operate only within their intended scope, reducing context noise, strengthening governance, and delivering enterprise-grade control over agent behavior.

Control: SCIM and Custom OAuth Applications

Connect AI enforces per-user authentication with native source-system permissions applied dynamically at runtime, backed by full audit trails. New governance enhancements include SCIM 2.0 for automated identity lifecycle management and Custom OAuth Applications that enable organizations to use first-party credentials to meet internal security and compliance requirements. Every query is authenticated, authorized, and auditable.

The 25% Accuracy Gap: Why Architecture Matters

MCP is becoming the default interface between AI agents and business software. But how accurately do MCP providers actually return data? To find out, CData tested five MCP providers, representing the major architectural approaches in the market, across four sources (CRM, project management, data warehouse, and ERP) using 378 real-world prompts. Every response was scored against pre-established ground truth. No partial credit.

The results revealed a significant accuracy gap. CData Connect AI achieved 98.5% accuracy (67 of 68 correct responses). The other providers ranged from 65% to 75%—failing on one out of every three to four queries. The failures weren’t random: they clustered around relative date logic, multi-filter queries, semantic interpretation of business terms, and write operations, exactly the kinds of tasks AI agents need to perform reliably every day.

For organizations moving beyond copilots toward autonomous agents that read, write, and act on live business data, this gap is decisive. At 75% accuracy, an AI agent fails one out of every four actions. And that inaccuracy compounds: 75% accuracy across a five-step workflow means less than 24% of processes complete successfully. In effect, a 75% per-step accuracy rate becomes a roughly 76% workflow failure rate.
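The compounding arithmetic behind that claim is easy to verify:

```python
# Per-step accuracy compounds multiplicatively across an agent workflow:
# at 75% per step, a five-step chain completes end to end under 24% of the time.
per_step = 0.75
steps = 5
print(f"{per_step ** steps:.1%}")  # → 23.7%
```

The same math explains why small per-step accuracy gains matter so much: at 98.5% per step, the same five-step workflow completes about 93% of the time.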

Most MCP providers translate natural language directly into API calls, which works for simple lookups but breaks down when queries require date math, multi-condition filtering, or platform-specific business logic. Connect AI uses a relational abstraction layer with semantic intelligence that understands entity relationships, business conventions, and workflow rules. That’s why it maintained near-perfect accuracy across every platform tested, including ERP, where the vendor’s own native MCP server failed completely.

View the full benchmarking methodology and results here: https://www.cdata.com/lp/ai-accuracy-whitepaper/

Organizations deploying AI in production need an accuracy rate that prevents autonomous agents from creating more cleanup work than they save. Connect AI is built to clear that bar because connectivity, context, and control aren’t just platform features. They’re what makes accuracy at scale possible.

CData at Gartner Data & Analytics Summit

CData will be at the Gartner Data & Analytics Summit at Booth #308, where attendees can connect with the team and see the latest in universal data connectivity.

Speaking Session: AI Agents and the Future of Digital Work with Microsoft — CData Chief Product Officer Ken Yagen will take the stage alongside Microsoft Partner Director of Product Management James Oleinik on Wednesday, March 11 (11:15–11:45 AM EDT). The session will present a joint blueprint for moving from AI pilots to production-ready agentic AI, exploring how Copilot Studio and universal data connectivity can deliver the governed infrastructure enterprises need as Gartner predicts 40%+ of agentic AI projects will be canceled by 2027 without the right architecture in place.

Supporting Resources

  • The 25% Accuracy Gap: MCP Provider Performance Across Enterprise Workloads — CData’s benchmark of five MCP providers across 378 enterprise queries found a 25+ percentage point accuracy gap, with CData Connect AI achieving 98.5% accuracy compared to 65–75% for other providers. Download the whitepaper: https://www.cdata.com/lp/ai-accuracy-whitepaper/
  • The State of AI Data Connectivity Report: 2026 Outlook — Based on research with 200+ data and AI leaders and insights from AI pioneers at Microsoft, AWS, and Google, CData’s report found that only 6% of enterprises consider their data infrastructure fully ready for AI — establishing a direct link between data infrastructure maturity and AI success. Download the report: https://www.cdata.com/lp/ai-data-connectivity-report-2026/

¹ Gartner, Inc., “Gartner Says Worldwide AI Spending Will Total $2.5 Trillion in 2026,” Gartner.com (Jan. 15, 2026), accessed Feb. 20, 2026, https://www.gartner.com/en/newsroom/press-releases/2026-1-15-gartner-says-worldwide-ai-spending-will-total-2-point-5-trillion-dollars-in-2026
GARTNER is a trademark of Gartner, Inc. and/or its affiliates.

How the February 28 Strikes Triggered a New Wave of AI-Assisted Attacks on US Critical Infrastructure

Posted in Commentary with tags on March 9, 2026 by itnerd

CloudSEK has posted a pair of research reports that are highly relevant to the cyber dimension of the Iran-US conflict, especially in light of developments since the February 28 strikes.

Following the February 28 US-Israel strikes on Iran, CloudSEK has documented an immediate and significant surge in Iranian-aligned cyber activity targeting US critical infrastructure, with AI now acting as a direct force multiplier for threat actors.

The key findings:

  • Over 60 Iranian-aligned hacktivist groups activated on Telegram within hours of the February 28 strikes, the largest single-event mobilization of this ecosystem ever recorded.
  • An Electronic Operations Room was formed on Telegram to coordinate attacks, operating on ideological initiative rather than central state direction, which makes activity harder to predict and constrain.
  • More than 40,000 US industrial control systems are currently reachable on the public internet, many with default or no credentials, representing an immediately exploitable attack surface.
  • CloudSEK researchers demonstrated that an actor with no prior ICS knowledge can move from intent to a working list of accessible US industrial targets in under five minutes using AI tools and passive reconnaissance. No scanning, no exploitation, no specialist knowledge required.
  • The same AI platforms now embedded in US defense operations are accessible to threat actors for offensive reconnaissance, creating a dual-use dynamic that significantly widens the threat.

Both reports are primary-sourced, technically detailed, and directly tied to the current conflict escalation. The full write-ups are here:

Report 1: AI, the Iran-US Conflict, and the Threat to US Critical Infrastructure
https://www.cloudsek.com/blog/ai-the-iran-us-conflict-and-the-threat-to-us-critical-infrastructure

Report 2: Threat Actor Landscape Assessment of ICS/OT Targeting in the 2026 Iran-US Conflict
https://www.cloudsek.com/blog/a-threat-actor-landscape-assessment-of-ics-ot-targeting-in-the-2026-iran-us-conflict-and-the-scale-of-the-risk

ESET Opens 2026 Women in Cybersecurity Scholarship Applications Across Canada on International Women’s Day

Posted in Commentary with tags on March 9, 2026 by itnerd

ESET today announced the opening of applications for its Women in Cybersecurity North American Scholarship, launching on International Women’s Day in alignment with the 2026 theme, #GiveToGain. Now entering its 11th year, the program continues ESET’s longstanding commitment to support and empower women pursuing careers in cybersecurity through financial assistance, mentorship, and community-building.

Originally established in 2016 in the United States and expanded to Canada in 2021, ESET’s Women in Cybersecurity Scholarship was one of the earliest initiatives of its kind in the industry. In Canada alone, the program has awarded more than $50,000 to 14 women, expanding from one $5,000 award in its first year to $15,000 across three scholarships today. Many recipients have gone on to build successful careers in cybersecurity and technology.

The need for continued action remains clear. According to the most recent (ISC)² Cybersecurity Workforce Study, approximately 22% of the global cybersecurity workforce is comprised of women, a sign of gradual progress but continued underrepresentation across the industry. In Canada, women account for 21.2% of cybersecurity professionals, underscoring the need for initiatives to expand access and strengthen the talent pipeline. As emerging technologies like AI reshape the threat landscape, a diversity of perspectives is critical to developing ethical and effective solutions.

For the 2026 application cycle, ESET Canada will award three $5,000 awards to applicants demonstrating strong technical aptitude, leadership potential, and a commitment to cybersecurity.

DETAILS AND HOW TO APPLY

Applications are now being accepted for the 2026 round, and submissions must be received by 11:59 p.m. PT on April 8, 2026. Applicants can learn more about the scholarships and submit their application on ESET’s dedicated webpages. If you’re a Canadian student, apply here. Questions? Email CA-scholarship@eset.com (Canada-only inquiries).

Ubitium tapes out universal processor to end embedded computing complexity crisis

Posted in Commentary with tags on March 9, 2026 by itnerd

Ubitium today announced the tape-out of its first silicon on Samsung Foundry’s 8nm process. The tape-out was completed in December 2025. The chip is the first universal RISC-V processor to replace the stack of specialized processors used in modern embedded systems.

Embedded computing, a $115 billion market, has reached a breaking point. Cars once ran on one processor; today’s vehicles contain more than 200, each with its own toolchain, software stack and supplier. Performance is no longer the only limiting factor. Complexity is. As AI workloads move into robots, drones, and industrial machines, this complexity becomes unsustainable.

Ubitium builds on RISC-V, the open-source architecture already used in billions of chips worldwide, and extends it beyond a conventional CPU. Its universal processor runs Linux and RTOS simultaneously, handles radar and audio signals in real time, and executes neural networks for inference at the edge, without separate accelerators or coprocessors, while preserving full RISC-V software compatibility.

Ubitium does for embedded compute what software-defined radio did for wireless: it replaces fixed-function hardware with a single piece of reconfigurable silicon. The result: embedded systems that ship faster, cost less, and have long product lifecycles.

Ubitium is working with Samsung Foundry, Siemens Digital Industries Software and ADTechnology as it advances toward production silicon.

Ubitium’s founders have spent decades building programmable architectures and the software stacks that unlock them at scale. CTO Martin Vorbach created PACT XPP, an early commercial reconfigurable processor, and holds 200+ processor-architecture patents. The core team combines deep industry experience from Intel, Texas Instruments, Apple and NVIDIA, with 350+ peer-reviewed publications.

The tape-out validates the foundational components of Ubitium’s architecture: the Universal Processing Array with runtime reconfiguration and LPDDR5 memory interface. A second tape-out is targeted for later this year, with volume production in 2027.

Technical Notes

  • Workload coverage: Ubitium’s universal processor spans general-purpose computing, real-time signal processing, and massively parallel AI inference on a single die, in a homogeneous architecture.
  • Software stack: Full Linux and RTOS support, standard RISC-V toolchains, and compatibility with modern software frameworks. No need for proprietary languages or vendor-specific compilers.
  • Target applications: Radar and multi-sensor signal chains, real-time audio and voice, computer vision, edge AI, automotive cockpits, industrial HMI.
  • Runtime adaptability: The Universal Processing Array shifts execution mode at runtime (CPU, DSP, GPU, parallel accelerator) without context-switch penalty or external offload.
  • System consolidation: One processor, one toolchain, one qualification cycle. Reduces BOM cost, board complexity, and supplier dependencies across product lifecycles.

Today Is International Women’s Day

Posted in Commentary on March 8, 2026 by itnerd

International Women’s Day 2026 is being celebrated today under the theme “Give To Gain,” emphasizing support, collaboration, and gender equality. Since this is a tech blog, I reached out to a pair of women in tech to get their views on this important day.

Margaret Hoagland, VP, Global Sales & Marketing, SIOS Technology

“On International Women’s Day, we honor the courage of women like Anita Hill, Ruth Bader Ginsburg, and Malala Yousafzai—whose bravery and sacrifice reshaped the future for women everywhere. Their leadership expanded rights, opportunity, and voice. But progress is not permanent. Without our continued vigilance and action, the gains they fought for can be eroded. Let us honor their legacy not only with words, but with sustained action to protect and advance equality for the next generation.”

Betsy Doughty, Vice President of Partner Marketing, Hammerspace

Gender equality advances when we choose to build it – deliberately, consistently, and together. Throughout my career, whether leading employee resource groups, running WILD (Women Inspiring Leadership Development), mentoring women at CU Leeds, or learning from mentors myself, I’ve seen that progress doesn’t happen by accident; it happens through intentional connection. The theme Give to Gain reflects what I’ve experienced firsthand: when we give time, advocacy, and opportunity, we gain perspective, growth, and stronger communities in return. Nowhere is that more evident than in mentorship and networking, and particularly women learning from other women.

Mentorship changed everything for me. Early in my career, mentors recognized my potential before I could articulate it myself. They listened, advocated, and created opportunities that altered my trajectory. They showed me that great mentors don’t hold talent in place – they help it move forward. Over time, I stepped into mentoring roles of my own, offering guidance, opening doors, and supporting women at pivotal moments in their careers. What surprised me most was how much I gained in return: clarity, self-reflection, fresh perspective, and the privilege of watching confident, capable leaders emerge. You don’t need to be at the peak of your career to mentor; you simply need to share what you’ve learned so far.

Networking plays a similarly powerful role. For women, especially, access to networks builds visibility, confidence, and a sense of belonging. Creating intentional spaces for connection fosters shared language around growth and leadership, turning individual success into collective momentum. For me, Give to Gain is not an abstract idea—it’s a lived experience. Every time we choose to lift one another as we climb, we strengthen not just individual careers, but the foundation for lasting gender equality.