Arcitecta Transforms Growing File Management with Unparalleled Speed and Efficiency for Live Broadcast, Sports Production and Media Entertainment Organizations

Posted in Commentary with tags on March 31, 2025 by itnerd

Arcitecta, a creative and innovative data management software company, today announced its latest solution, Mediaflux® Real-Time, which enables workflow acceleration, empowers remote collaboration and minimizes downtime with unmatched speed and efficiency for hybrid production environments. With Arcitecta’s new Mediaflux Real-Time, organizations can create more flexible workflows and utilize “edit anywhere” capabilities to accelerate content delivery and results for live broadcasts, sports production and media entertainment. Arcitecta and Dell Technologies will showcase the Real-Time solution, combined with Dell PowerScale and ECS, in the Dell Technologies booth #SL4616 at the NAB Show, April 6-9, 2025, at the Las Vegas Convention Center.

In fast-paced environments such as live sports production, broadcast and media entertainment, editors often need to access live, growing video files as they are recorded. Traditionally, this workflow relies on accessing files from a single location, which can create bottlenecks and delays. Today’s hybrid production environments demand immediate access to content for live productions and rapid post-event workflows. Editors working remotely often experience delays due to slow transfers and playback speeds, which extend the time to the final product. 

Bottlenecks and delays result in lost revenue, compromised product quality and decreased competitive advantage. With Arcitecta Mediaflux Real-Time, production workflows gain unparalleled speed, flexibility and efficiency. Ideal for live sports, broadcast, hybrid production environments and more, the solution supports real-time editing, removes workflow bottlenecks and enhances remote collaboration. Customers gain a competitive edge with faster content delivery and seamless media management. Real-Time also eliminates the need to buy and configure dedicated streams or connections to each editing location, requiring only a single stream to transfer the data to multiple sites – reducing cost and infrastructure requirements. 

The Mediaflux Real-Time solution eliminates bottlenecks and delays, enabling teams to work faster and smarter:

  • Edit anywhere: No longer tethered to event locations, editors can access growing files from any site, enabling real-time collaboration across multiple locations.
  • Fast turnaround: Remote editors can create highlight reels or edit live footage almost instantly, dramatically cutting post-production time.
  • Smoother workflows: Content can be played back in real time across sites and reviewed as it is rendered, ensuring faster workflows and higher productivity.

Optimizing Growing File Management

As organizations scale, managing growing file volumes presents several challenges. Storage and organization become increasingly complex, making file retrieval inefficient without proper metadata and indexing. Large file transfers can strain network bandwidth, slowing performance and causing potential downtime. Collaboration bottlenecks arise when multiple users work on the same files, leading to versioning conflicts and duplication. Security risks also increase, with greater exposure to unauthorized access, data breaches, and compliance issues. Additionally, unchecked data growth drives up storage costs, requiring cost-effective solutions to balance performance and budget constraints.

Mediaflux Real-Time is hardware, file-type and codec agnostic. It delivers centralized content management, network optimization, collaboration tools, security and cost efficiency, enabling organizations to:

  • Organize storage and metadata for easy access and retrieval.
  • Ensure reliable infrastructure for handling large file transfers.
  • Use version control and integrated feedback systems to enhance teamwork.
  • Share content with multiple locations in real time and continue to grow the file with live content.
  • Protect sensitive files with encryption and access controls while optimizing storage usage.

In its recent Data Sheet, Dell Technologies noted, “It shouldn’t matter where these data workflows occur – joint solutions from Arcitecta and Dell Technologies deliver data where it’s needed at the right time. Arcitecta’s pioneering metadata and data orchestration tools coupled with Dell Technologies’ powerful, industry-trusted infrastructure enable a global distributed edge that stays simple and performant, no matter the complexity of your workflows.”

Pricing and Availability

Mediaflux Real-Time is available immediately. It is part of the Mediaflux and Livewire suite of solutions and works seamlessly with virtually all data storage and infrastructure solutions and protocols.  

NAB 2025: Arcitecta + Dell Technologies, Better Together

Arcitecta and Dell will showcase the Mediaflux Real-Time solution, in combination with Dell PowerScale and ECS, in the Dell Technologies booth #SL4616 at the NAB Show, April 6 – 9, 2025, at the Las Vegas Convention Center. To schedule a meeting and see a demonstration of Mediaflux Real-Time, contact Arcitecta at https://www.arcitecta.com/events/2025/nab-show/chat/.

Today Is World Backup Day

Posted in Commentary on March 31, 2025 by itnerd

World Backup Day (#WorldBackupDay!) is today. It began in 2011 as a simple reminder from a group of Reddit users who had seen too many people lose their important files… family photos, work documents, personal projects… because they didn’t have backups. They wanted to spread the word in a way that would stick, so they picked March 31, the day before April Fools’ Day, with the message… “Don’t be a fool – back up your data!”

What started as an internet joke quickly became a worldwide movement. Tech companies, IT professionals, and even everyday people started sharing stories of data disasters – hard drives crashing, phones getting lost, files disappearing – and the relief that comes from having a backup. Now, every year, it serves as a friendly wake-up call to take a few minutes and make sure the things that matter most – your photos, videos, work, and memories – are safe, no matter what happens.

Executives from DH2i, Leaseweb USA, Leaseweb Canada, Cerabyte, Active Archive Alliance, Arcitecta, Peer Software, Hammerspace, and Other World Computing (OWC) had this to say about this important day: 

Don Boxley, CEO and Co-Founder, DH2i:

“World Backup Day is a great reminder that just having backups isn’t enough. Sure, they’re critical for recovery, but they don’t keep your business running in real-time. If something goes wrong – whether it’s a system crash, a cyberattack, or just someone making an honest mistake – you need more than a backup. You need a plan that keeps your data within reach and your business running like nothing ever happened.

Because here’s the thing… when downtime happens, waiting around for a backup to restore isn’t an option. Businesses need to stay up and running, no matter what. That means thinking beyond just storing copies of data and making sure it’s always accessible, secure, and easy to recover. At the end of the day, it’s not just about backing up – it’s about making sure you never have to hit pause in the first place.”

Richard Copeland, CEO, Leaseweb USA, Inc.: 

“Skipping backup isn’t just a bad idea – it’s a ticking time bomb. Many companies think they’re saving money by relying on hardware redundancy or high availability, only to get blindsided when their data vanishes. One wrong click, one system crash, or one ransomware attack, and suddenly, they’re in full-blown disaster mode, scrambling to recover what’s lost. No backup? No safety net. Just downtime, financial hemorrhaging, and a whole lot of regret. 

A proper backup strategy isn’t some nice-to-have – it’s your last line of defense when things go sideways. The smart play? The 3-2-1 rule: three copies of your data, in two different locations, with one offsite or in the cloud. Don’t just assume your backups work – test them, because the worst time to find out your safety net has holes is when you’re already falling. Skipping backup might save a little cash upfront, but when disaster strikes, you’ll be paying for it ten times over.”
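The 3-2-1 rule Copeland describes is mechanical enough to check in code. The sketch below is purely illustrative; the inventory, class and field names are invented for this example and are not part of any vendor’s tooling:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "hq-nas" (names invented for this example)
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # stored away from the primary site?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3-2-1 rule: at least 3 copies, on at least 2 media types, at least 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("hq-nas", "disk", offsite=False),             # working copy
    BackupCopy("office-usb", "disk", offsite=False),         # local backup
    BackupCopy("cloud-eu", "object-storage", offsite=True),  # offsite copy
]
print(satisfies_3_2_1(inventory))  # True
```

A check like this only validates the shape of a backup plan; as Copeland notes, you still have to test that the copies actually restore.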

Roger Brulotte, CEO, Leaseweb Canada:

“Imagine waking up to find your systems are locked, your data is inaccessible, and your customers are left in the dark – in other words, your business is at a complete standstill. Whether it’s a cyberattack, a hardware failure, or just plain human error, losing access to critical information can be catastrophic. Backup isn’t just a checkbox – it’s your safety net. Without a solid backup strategy, a single incident could cost you days of productivity, millions in revenue, and, in worst case scenarios, your entire business.

But here’s the kicker… not all backups are created equal. Cybercriminals know to target your backups first, embedding ransomware that lies dormant until it’s too late. That’s why businesses need to follow backup best practices. For instance, you can implement the 3-2-1 rule (three copies of your data, on two different media, with one stored offsite and if one or more of the backups are immutable – i.e., cannot be altered – all the better). This can be enhanced with what some refer to as CTAM, otherwise known as the Chevy Truck Access Method. All kidding aside, this step can make or break your backup strategy. You must make sure you keep an air-gapped offline backup that can be leveraged as a last line of defense. 

A strong DR plan doesn’t mean just having backups – it means knowing they’ll work when you need them most. However, don’t worry – if this isn’t your forte, there are experts that can help you craft, implement, and/or manage your backup and DR. It is an investment that pays for itself many times over. After all, in today’s world, it’s not if disaster will strike, it’s when… when will be the first time, and the next, and the next…” 

Larry O’Connor, CEO and Founder, Other World Computing:

“If you’re a creative or a business owner, your data isn’t just files; it’s your work, your ideas, your late nights and early mornings. It’s everything you’ve built. Now, imagine waking up one day and it’s all gone. No photos, no projects, no client records. Just… gone. It’s the kind of thing you assume won’t happen to you… until it does. That’s why World Backup Day is a good gut check. A solid backup plan isn’t about expecting disaster, it’s about making sure that no matter what – whether it’s a hardware failure, a cyberattack, or just a simple mistake – you don’t lose the work that matters most. 

But let’s be honest… having a backup doesn’t mean much if it’s not reliable. That’s where the right tech and strategy come in. A well-planned and executed backup strategy means you’re not relying on memory, and the right tech enhanced with the right strategy ensures you’re protected no matter what. The goal isn’t just to back up your data; it’s to have a system you can actually trust and know it just works. Because when you know your work is safe, you can stop worrying about ‘what if’ and focus on doing what you love. So, if you haven’t checked your backup setup in a while, take a few minutes today. Future you will be grateful.”

Molly Presley, SVP of Global Marketing for Hammerspace:

“The importance of automation in protecting and backing up data across a company’s global infrastructure is increasing with the rise of cyber-attack threats, data breaches, and unrelenting data growth, underscoring automation’s crucial role in data management and cybersecurity. 

Managing vast unstructured data across diverse storage systems, multiple global locations, and cloud platforms requires considerable effort and resources. Relying on manual processes is increasingly time-consuming and risky, exposing critical data to human error and missed backups. 

By implementing global-level data protection services, organizations will defend global datasets and maximize their value through automated policies. As organizations become increasingly driven by artificial intelligence, where data is essential to accurate analysis, informed decisions and innovative breakthroughs, automation is becoming indispensable. 

Automated data protection policies bolster enforcement across distributed geographies, strengthening an organization’s data resiliency and business continuity. They also enable organizations to manage their global data environments and maintain the efficacy of their AI systems and data pipelines. 

A streamlined, policy-driven data management approach can transform how organizations manage and protect data by distinguishing newly created data, ensuring global data protection across distributed locations, automating data copy creation controls and services, and enforcing compliance with corporate governance standards.”

Jimmy Tam, CEO, Peer Software:

World Backup Day serves as a crucial reminder that data resilience isn’t just about having a copy of your data, it’s about ensuring business continuity with minimal disruption. Many organizations still rely on centralized storage models, but these systems pose risks. A single point of failure, slow recovery from outages, and the increasing complexity of modern data environments demand a re-evaluation of storage strategies. The rise of distributed storage models, which keep data where it is created and used most, provides an opportunity to enhance resilience. However, simply decentralizing data isn’t enough. Businesses must also adopt robust data orchestration strategies to ensure efficient access, security, and performance. As data volumes grow and compliance demands become more stringent, companies must rethink how they store, manage, and protect their critical assets to minimize downtime and financial loss.

By understanding data flows, leveraging AI-driven storage optimization, and ensuring strong security measures, organizations can build a storage infrastructure that withstands disruptions and safeguards business operations. This World Backup Day, organizations need to take the time to evaluate their storage strategy because the cost of downtime is too high to ignore.

Martin Kunze, co-founder and CMO of Cerabyte:

“In a world where every digital moment carries weight, World Backup Day is more than a reminder to protect our files – it’s a call to safeguard the digital legacy that shapes our era and our society. True data preservation isn’t just about storage; it’s about ensuring that today’s knowledge, culture, and discoveries remain accessible for generations to come.

Information is the spine of our society, and it is threatened more than ever. Preserving this legacy demands more than traditional backup methods. It requires a future-proof strategy that resists degradation, overcomes obsolescence, and guarantees permanent access. This isn’t just about saving data. It’s about securing digital immortality.”

Jason Lohrey, CEO and Founder of Arcitecta:

“It’s estimated that there will be more than 180 zettabytes of data in the world by the end of 2025. With the scale of data continually growing, making it secure and resilient is becoming harder to achieve. How do organizations back up hundreds of petabytes of data? The answer is they don’t, with traditional backup, and that’s precarious. Vulnerabilities scale with data growth: corruption, malware, accidental deletion, mysteries, and the list goes on. Furthermore, the time it takes to find lost data with traditional backup systems increases with the amount of backup data stored. IT departments are constantly pulled into the task of data recovery. Data resilience for trillions of datums, and instant, self-serve data recovery, is not possible with backup as we know it.

The process of recovery is not what it should be – it’s tedious and slow. Traditional backup works by scanning a file system to find and create copies of new and changed files. The problem is scanning takes longer as the number of files grows – so much so that it’s becoming impossible to complete scans within a reasonable time frame. They usually run during the night when systems are likely to be less volatile. The process occurs at set intervals, which means any change before the next scan will be lost if there’s a system failure. Traditional backup cannot and does not meet the objective of zero data loss.
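To illustrate the scaling problem Lohrey describes, here is a minimal sketch of scan-based change detection. It is a toy, not any product’s implementation, but it shows why every run must stat every file, so the cost grows with the total file count rather than with the number of changes:

```python
import os

def scan_for_changes(root: str, last_snapshot: dict[str, float]) -> list[str]:
    """Return paths changed since last_snapshot; update the snapshot in place."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):    # visits every directory...
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)          # ...and stats every file
            if last_snapshot.get(path) != mtime:
                changed.append(path)                # new or modified since last run
            last_snapshot[path] = mtime
    return changed
```

Note that anything changed and reverted between two runs is invisible to a scheme like this, which is exactly the window of loss the quote points to.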

New approaches are emerging that enable continuous data availability as a strong first line of defense against cyber threats, enabling organizations to recover compromised data easily and almost instantly. Continuous data availability is a game-changing form of protection that actively records every significant change in real-time for every file so a user can go back to any point in time to retrieve data – easily and without the assistance of IT. This approach merges the file system and backup as one entity. As a result, every change in the file system can be recorded as it happens, making it seamless to retrieve lost or deleted data, regardless of when it existed and across the entire time continuum. Organizations will increasingly leverage continuous data availability technology to protect data from loss and cyber threats.”
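The continuous-availability idea can be sketched as a toy journal. This is an assumption-laden illustration of the concept only, not Arcitecta’s implementation: every write is recorded as it happens, so any point in time can be read back without a separate restore step:

```python
import time
from collections import defaultdict
from typing import Optional

class VersionedStore:
    """Toy journal: every write is recorded, so no change is ever lost."""
    def __init__(self):
        self._journal = defaultdict(list)   # path -> [(timestamp, content), ...]

    def write(self, path: str, content: bytes) -> None:
        # Record the change as it happens, instead of waiting for a nightly scan.
        self._journal[path].append((time.time(), content))

    def read_as_of(self, path: str, when: float) -> Optional[bytes]:
        """Return the newest version of `path` recorded at or before `when`."""
        versions = [(t, c) for t, c in self._journal[path] if t <= when]
        return max(versions)[1] if versions else None

store = VersionedStore()
store.write("/report.txt", b"draft")
time.sleep(0.01)                    # sleeps only separate timestamps in this demo
midpoint = time.time()
time.sleep(0.01)
store.write("/report.txt", b"final")
print(store.read_as_of("/report.txt", midpoint))  # b'draft'
```

A real system would journal at the file-system layer rather than in memory, but the retrieval model is the same: pick a time, read the data as it existed then.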

Rich Gadomski, Co-Chairperson of the Active Archive Alliance:

“Effective data management software can help organizations not only optimize storage and backup but also enhance cybersecurity. By moving inactive data onto active archive media, organizations reduce the risk of malware infecting their primary storage. Media technologies, such as tape, offer powerful, easy-to-deploy air-gap defenses where IT personnel can establish a literal separation from any online path to prevent unauthorized electronic access. 

For many data centers, the archive copy is often the only copy of archival data, leaving it exposed in the event of data loss. Since the business value of untapped archival data is increasing, especially with the rapid rise of AI, creating a second, secure air-gapped copy in a different geographic location will soon become a standard data protection strategy. Storage administrators often leverage the 2-1-1 Archive Strategy for backup, recovery, and disaster recovery to protect their primary archival storage: 

  • Create a second (2) archival copy of the data
  • Ensure at least one (1) of the copies is stored at a different physical location 
  • Store at least one (1) of the copies offline 

While cybersecurity software serves as a first line of defense against malware, organizations must always be prepared for the possibility of a successful attack. As massive data growth expands the attack surface, having a robust data protection and backup strategy is essential to ensure your data assets remain secure, protected, and recoverable.”

UPDATE: I have received additional comments on World Backup Day

Stephen Bacon, Vice President, Data Protection and Cyber Resilience, HPE:

World Backup Day is an annual reminder to protect your data and, with that, your customers, your employees, and your organization. While the heritage of the day is backup, there is far more to data protection these days to address new threats, new workloads, and new regulations around the world. It is also about cyber resilience, rapid recovery from backup, and seamless disaster recovery to keep data safe and organizations operational no matter what. 

This year, let World Backup Day serve as a crucial reminder that backup alone is no longer enough. Organizations need a comprehensive, multi-layered approach that spans from edge to cloud and source to target, including storage array-level ransomware protection, cyber vaulting, and disaster recovery everywhere your critical workloads run. 

Chris Girard, Sr. Director of Product Management, VDURA:

Today, on World Backup Day, we recognize the evolving landscape of data storage infrastructure within HPC. The industry’s shift toward complex computational tasks and AI-driven innovation necessitates advanced approaches to data storage. 

In HPC, efficient checkpointing is critical—allowing for immediate recovery and minimizing downtime during intensive computations. Equally critical in AI development, the primary form of backup begins at the model checkpoint, which must support frequent, rapid saves to avoid data loss and facilitate smooth model iteration. Prioritizing instant recovery minimizes downtime for clusters engaged in training next-generation AI, and these solutions expand to support the growth of AI development environments. 

As AI continues to redefine the future, we must recognize the importance of speed in saving and retrieving data, and of data resilience, keeping data available and durable for AI modeling. 

Bruce Kornfeld, Chief Product Officer at StorMagic:

“One debate we’re currently seeing in the backup industry is agent-based versus agentless backup. In recent years agentless backup has become more popular, especially for virtual environments, because it doesn’t require backup agent software to be installed on each virtual server. With more complex environments than ever before, having agents on each VM can add administrative overhead to an IT department (the need to keep all of the agents updated). The biggest players in virtualization software – VMware, Microsoft, Nutanix – have all worked with many of the backup software providers over the years to develop custom integrations for agentless backup. It’s become the norm.   

“However, Broadcom’s acquisition of VMware nearly 18 months ago has led many customers to rethink their virtualization strategy and consider alternative hypervisors that deliver the capability they need but are much more cost-effective than staying with VMware. But moving to a virtualization solution outside of these “big 3” means considering a more open, agent-based approach to backup. The backup software providers don’t have the resources to work with alternative hypervisor providers to do the custom engineering work needed for agentless backup integration.  

So what should IT departments that want to save money and move off of VMware do? They simply shift to an agent-based approach. This is how backup has been done for decades and, crucially, will work with any hypervisor. All backup software providers have agents available that typically deliver the same functionality as agentless for the same cost, making it a very valuable alternative with the flexibility and ROI that today’s businesses require.”  

Apple Is Going To Get Sued In Canada Over Apple Intelligence

Posted in Commentary with tags on March 30, 2025 by itnerd

Apple’s problems with Apple Intelligence go from bad to worse it seems. After this lawsuit dropped over Apple Intelligence not showing up on iPhones and other iDevices as promised comes this:

A Canada-wide class-action lawsuit has been launched in B.C. alleging Apple Canada engaged in misleading advertising when it marketed the iPhone 16, by promising it would include innovative artificial intelligence features that it did not have.

The “pervasive” marketing campaign included “misrepresentations and/or misleading statements” that the iPhone 16 would be equipped with its new Apple Intelligence, to induce consumers into buying, according to the notice filed in B.C. Supreme Court.

“As such, consumers paid an unlawful price premium for the … iPhone 16 model smartphone that they did not need, based on artificial intelligence features that did not exist,” it alleged.

The suit names Apple Inc. and Apple Canada as defendants.

Now the usual disclaimer that these claims have not been tested in court applies as always. But boy, Apple really is not in a good place here. These are lawsuits that really don’t help their cause in any way. Thus you have to wonder what the brain trust at Apple Park is going to do. I say that because you have to assume that other jurisdictions have had, or will have, similar lawsuits pop up. Which means that this has the potential to get very ugly for Apple.

Get that popcorn ready.

SOCRadar’s CISO Comments On The Oracle Cloud Data Breach

Posted in Commentary with tags on March 29, 2025 by itnerd

A threat actor using the alias “rose87168” claimed responsibility for breaching Oracle Cloud systems, allegedly stealing 6 million user records containing encrypted passwords, authentication keys, and directory credentials. Oracle has denied any breach occurred, stating no customer data was compromised.

To investigate these claims, SOCRadar contacted the threat actor, who provided a 10,000-record sample. This dataset appears consistent with real Oracle Cloud user information, including structured fields like user IDs, encrypted credentials, and company-specific domains. While SOCRadar cannot confirm the full 6-million-record claim, the sample’s format and content seem legitimate and not easily fabricated.

According to Ensar Seker, CISO at SOCRadar:

“Several other security researchers and vendors have also analyzed the sample. At least three Oracle Cloud customers reportedly confirmed their information was present in the leaked data, further supporting its authenticity. These confirmations, along with observed Indicators of Attack (IOAs) such as irregular logins and suspicious file activity, suggest that the breach may indeed be real.

The hacker continues to provide screenshots and additional data fragments to prove the claim, illustrating structured user data likely sourced from an identity management system. The actor also claims to have exploited a known vulnerability (potentially CVE-2021-35587), though this has not been confirmed.

Despite the mounting evidence, Oracle maintains its stance that no breach occurred. The company has provided no technical explanation or alternative theory for the leaked data’s origin. This leaves many Oracle Cloud customers in a difficult position—unable to fully assess their exposure without further guidance.

In cybersecurity, even unconfirmed incidents should be treated with seriousness when multiple independent sources identify potential compromise. We recommend organizations remain vigilant, monitor their environments closely, and follow trusted updates from Oracle and the security community.

We urge all Oracle Cloud users to take precautionary steps, including:

  • Reviewing security logs from mid-February onward for unusual login attempts or access patterns.
  • Auditing user accounts, especially those with administrative privileges.
  • Rotating sensitive credentials such as SSO and LDAP passwords or keys.
  • Ensuring multi-factor authentication (MFA) is enabled across all accounts.”
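The first of those steps can be partially automated. The fragment below is a hypothetical illustration only; the log format, field names and cutoff handling are assumptions to adapt to whatever your identity provider or SIEM actually exports:

```python
from datetime import datetime

CUTOFF = datetime(2025, 2, 15)   # "mid-February onward", per the advisory

def suspicious_logins(lines: list[str]) -> list[str]:
    """Return FAILED_LOGIN entries stamped on or after the cutoff date."""
    hits = []
    for line in lines:
        # Hypothetical format: "<ISO timestamp> <event> <principal>"
        stamp, event, *_ = line.split()
        if datetime.fromisoformat(stamp) >= CUTOFF and event == "FAILED_LOGIN":
            hits.append(line)
    return hits

sample = [
    "2025-01-10T09:00:00 FAILED_LOGIN admin@example.com",  # before cutoff
    "2025-02-20T03:14:00 FAILED_LOGIN admin@example.com",  # in scope
    "2025-02-21T10:00:00 LOGIN_OK analyst@example.com",    # successful, ignored
]
print(suspicious_logins(sample))  # only the 2025-02-20 entry
```

A one-off filter like this is no substitute for proper SIEM correlation, but it shows how narrow and scriptable the recommended review window is.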

Much as I said in this post, this might be the breach that we’re all talking about in 2025. So far, my hunch on this is proving correct.

Over 200 Million Records Allegedly Belonging to X/Twitter Leaked

Posted in Commentary with tags on March 29, 2025 by itnerd

Recently, the Safety Detectives Cybersecurity Team stumbled upon a forum post on the clear web where a threat actor posted a link to a CSV file containing over 200 entries with information allegedly belonging to over 1 million X/Twitter users.

You can see their full report here: https://www.safetydetectives.com/news/x200m-leak-report/

ALIEN TXTBASE data-dump analysis: Dangerous or junk?

Posted in Commentary with tags on March 28, 2025 by itnerd

Today Specops Software published an analysis digging into the ALIEN TXTBASE data dump, which was recently merged into the HaveIBeenPwned (HIBP) dataset by Troy Hunt. 

As with the Rockyou2024 data dump last year, Specops Software researchers found that this dump isn’t quite the mega-leak it was initially hyped as. The ALIEN TXTBASE dump contained a pretty standard distribution of base words, passwords, and lengths – essentially a lot of people’s local password stores. There was a non-zero amount of junk, Telegram URLs, and other material mashed in there too. It’s clear this is someone collecting and processing a lot of stealer logs into one collection.

However, 20 million of the breached passwords were new to the Specops Breached Password database. 

For the full findings, the analysis can be read here: https://specopssoft.com/blog/alien-txtbase-data-dump-analysis/

Facebook Ban Test Drives 1,900% VPN Surge in Papua New Guinea

Posted in Commentary with tags on March 28, 2025 by itnerd

Recently VPNMentor published a report about an alarming increase in VPN demand in Papua New Guinea after the government shut down Facebook as a “test” conducted under the country’s anti-terrorism laws.

Their research team conducted an analysis of user demand data in PNG, observing a 1,900% spike in demand for the duration of the test.

You’ll find their report here: https://www.vpnmentor.com/news/papuanewguinea-vpn-surge/

Samsung Introduces The Galaxy A36 5G

Posted in Commentary with tags on March 28, 2025 by itnerd

Samsung today unveiled the Galaxy A36 5G, the latest Galaxy A series smartphone. For the first time, the Galaxy A series integrates Awesome Intelligence, bringing some of Galaxy’s fan-favorite AI-powered features to reimagine creativity, as well as robust security to provide a secure mobile experience.

Awesome Intelligence is the first comprehensive mobile AI exclusively available on Galaxy A36 5G and brings users powerful, fun and easy-to-use AI tools. Powered by One UI 7, the new Awesome Intelligence features bring amazing search and visual experiences to Galaxy A series users.

A fan-favorite on Galaxy A series devices last year, Google’s enhanced Circle to Search makes it easier than ever to search and discover from the phone’s screen. With the latest upgrades, the search feature is faster and more contextual, now recognizing phone numbers, email addresses and URLs on the screen and helping users perform actions with a single tap. The update also introduces Song Search, which can identify music playing nearby, on the device, or even from a user’s own voice when they hum or sing. With support for multiple languages, Song Search makes it effortless to find a tune, with users no longer needing to wait for that song title to finally come to them.

The Galaxy A series also takes the camera experience to a new level with creator-focused tools, starting with a powerful triple-camera system featuring a 50MP main lens on all devices and 10-bit HDR front lens recording on the Galaxy A36 5G for bright and crisp selfies.

The Galaxy A36 5G brings a refined Object Eraser, allowing users to remove unwanted distractions from photos. Whether it’s an unexpected passerby or a distracting shadow, users can manually or automatically select objects to erase, achieving a cleaner, more polished final image with just a few taps. Moreover, the Filters feature enables custom filter creation by extracting colors and styles from existing photos, which users can apply for a unique and personalized effect depending on mood and taste. With these intelligent tools, users can refine and enhance their photos effortlessly, bringing a new level of creativity to every shot.

With a 5,000mAh battery included throughout the entire lineup, the new Galaxy A series is designed to keep up with users’ daily routines. The Galaxy A36 5G supports 45W charging power and Super Fast Charge 2.0 technology, delivering even faster charging for extended use. The Galaxy A36 5G features the Snapdragon® 6 Gen 3 Mobile Platform. A larger vapor chamber helps sustain performance, ensuring smooth gameplay, video playback, and effortless multitasking.

Beyond performance, the new Galaxy A series is built to withstand life’s unpredictable moments. Galaxy A36 5G features an IP67 dust and water resistance rating for strong protection against the elements. Additionally, an advanced Corning® Glass cover material adds a layer of protection against scratches and cracks.

Thanks to the integration of One UI 7 on the Galaxy A series for the first time, Samsung is further supporting robust security. With Samsung Knox Vault, the Galaxy A series provides an extra, fortified layer of device security, transparency and user choice – ensuring sensitive data is protected. Equipped with the latest One UI 7 security and privacy features, Galaxy A series users benefit from holistic protection  — including enhancements in Auto Blocker, Theft Detection, More Security Settings and other features.

Pricing & Availability 

The Galaxy A36 5G will be available for purchase starting March 28th.

Pricing: 

  • 128GB – $529.99 CAD 

Guest Post: Software Supply Chains & the End of Reactive IT

Posted in Commentary with tags on March 28, 2025 by itnerd

By Tim Flower, DEX Evangelist at Nexthink

Software supply chain disruptions are the biggest danger to business resiliency today. One response: moving past the traditional ‘break/fix’ model of IT Services.

The last year has seen a spate of high-profile outages that have affected thousands of companies and millions of endpoints around the world. While the events have been different in many ways, there is one underlying commonality: in each case, the root of the problem is one that doesn’t get much attention. The software supply chains (i.e. all the existing component parts that underpin new software products) that enterprises and suppliers around the world rely upon are largely outside the control of internal IT teams.

Software supply chains are the single biggest danger to business resiliency today, with the average enterprise using nearly 1000 different apps [1] and 96% of codebases [2] featuring open source code.

All of this means that there’s no such thing as an ‘isolated incident’ anymore. Even if companies take every reasonable precaution, there is no guarantee that a mistake three steps down the line won’t cause days of unexpected downtime and millions in lost revenue. Even an unknown compatibility issue can lead to significant headaches during a large-scale deployment, to say nothing of the hurdles encountered when a supplier changes versions or discontinues support.

When disaster strikes

The problem is, when – and it is when, not if – major third-party incidents occur, the vast majority of businesses lack the visibility and capabilities needed to swiftly identify and remediate such issues. This is because many IT service delivery teams are using legacy management platforms that don’t allow them to move beyond a traditional, reactive model of handling tickets one by one when employees decide to call for help. In effect, the employees are providing IT monitoring services. This creates multiple problems, including:

  • In the middle of a costly and reputationally damaging crisis, IT teams end up wasting precious time trying to understand the scale of the problem before they can even start to look at how it can be fixed. Indeed, sometimes endpoints can remain out of action for days until an employee opens a ticket with the Help Desk.  
  • A lack of visibility also means that it’s impossible for IT service teams to effectively prioritize their remediation efforts to, for example, get customer-facing services up and running first to minimize external disruption. 
  • Additionally, it hampers any attempts at communication to give colleagues and clients information about what has happened and when normal service is likely to resume. 

An evolving function

None of this is to say that IT service teams are redundant or unimportant – far from it. Even when things are going smoothly, strong service teams are worth their weight in gold, never mind when a crisis occurs. In fact, as software supply chains become ever more entangled, the need for skilled IT support experts is only going to grow.

The issue is that, all too often, businesses aren’t providing their IT support staff with the necessary capabilities to proactively identify, understand, and mitigate problems. For instance, in the event of a major third-party outage causing a cascade of endpoints experiencing the dreaded ‘Blue Screen of Death’ (BSOD), IT support teams need to be alerted in real time to an unusual spike in system crashes, see which endpoints are affected, and receive insights into what the common root cause might be.

Armed with this information, IT support can take immediate steps to address the problem – for example by halting any application updates on other endpoints – and reduce the number of those affected by BSOD. And as endpoints are remediated, a platform providing real-time visibility can provide immediate status details on which systems still need attention and which ones are back up and running.
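To make the idea concrete, here is a minimal, hypothetical sketch of how a telemetry pipeline might flag a crash spike and surface a likely common root cause. This is not Nexthink’s actual product logic; the `CrashEvent` fields, the baseline, and the threshold are illustrative assumptions only.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CrashEvent:
    endpoint_id: str
    driver_version: str  # a candidate root-cause attribute (hypothetical)

def detect_spike(counts_per_minute, baseline, threshold=3.0):
    """Flag any minute whose crash count exceeds threshold x the normal baseline rate."""
    return [i for i, c in enumerate(counts_per_minute) if c > threshold * baseline]

def likely_root_cause(events):
    """Return the attribute value shared by the most crashing endpoints."""
    version, _ = Counter(e.driver_version for e in events).most_common(1)[0]
    return version

# Example: crashes jump from a baseline of ~2/minute to 20+/minute.
spikes = detect_spike([2, 1, 3, 20, 25], baseline=2)   # -> [3, 4]
events = [CrashEvent("ep1", "drv-2.1"),
          CrashEvent("ep2", "drv-2.1"),
          CrashEvent("ep3", "drv-1.9")]
suspect = likely_root_cause(events)                     # -> "drv-2.1"
```

In a real deployment the grouping attribute would come from rich endpoint telemetry (OS build, recently installed update, loaded driver), but the core pattern is the same: compare live counts against a baseline, then correlate the affected population on shared attributes.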

Managing the shift effectively

The surge in third-party software issues is a key driver of the transition away from the traditional ‘break/fix’ model of IT Services and towards something more proactive, but it’s not the only motive. Factors such as a desire to improve regulatory compliance, greater demand for upskilling and training from support workers, and changing ways of working are all key reasons why the transition is gathering pace. There is also a growing awareness that the 40+ year practice of reactionary IT is no longer scalable, and actually poses a risk to business viability.  

Taken together, the increased relevance of these issues demonstrates that there is a huge opportunity for IT services to take a larger and more important role in achieving core business objectives, especially as modern IT environments become ever more complex. IT needs to be a provider of business-enabling services, and no longer a team of expensive firefighters.

The next step is for senior leaders to champion this change by providing support staff with the necessary training and the ability to bring in new, modern capabilities that can transform IT Services from a short-term, reactionary function to one that is central to the operation and success of the entire enterprise.

Tim Flower is VP of DEX Strategy at Nexthink and the author of the Wiley book DEX for Dummies: A Practical Guide for Organizing and Executing an Effective DEX Strategy in Any Organization.

[1] 2024 Connectivity Benchmark Report: Insights from over 1000 IT Leaders

[2] 2024 Open Source Security and Risk Analysis Report

NIST Adds SandboxAQ’s HQC Algorithm to its List of Post-Quantum Cryptography Standards

Posted in Commentary with tags on March 27, 2025 by itnerd

SandboxAQ has announced that the National Institute of Standards and Technology (NIST) has officially selected HQC (Hamming Quasi-Cyclic) as the fifth algorithm in its suite of post-quantum cryptographic (PQC) standards. Of these five algorithms, three will be used for signatures. The other two, HQC and ML-KEM, will be the NIST-approved algorithms that protect the confidentiality of communications across the Internet, cellular networks, payment systems, and more.

The selection of HQC marks SandboxAQ’s second major contribution to NIST’s post-quantum standardization effort, a key step in ensuring the protection of the world’s most critical data. This landmark decision represents a significant milestone in the global transition to a robust, quantum-safe encryption future and further solidifies SandboxAQ’s position at the forefront of cryptographic innovation.

HQC is a key encapsulation mechanism (KEM) designed to secure the exchange of encryption keys in a quantum-resistant manner. Unlike traditional public-key systems such as RSA and elliptic-curve cryptography (ECC), which quantum computers render obsolete, HQC is built on the well-established mathematical foundation of error-correcting codes, which is not vulnerable to quantum attacks. It provides strong security guarantees while balancing performance factors such as computational efficiency and key size, which are primary considerations for large-scale, real-world deployments. In NIST’s final selection report, the HQC algorithm, co-invented by SandboxAQ team members, stood out as a robust and reliable candidate for wide-scale adoption across industries, following multiple rounds of global cryptanalysis and peer review.
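For readers unfamiliar with KEMs, the interface is simple: key generation, encapsulation (producing a ciphertext plus a shared secret from a public key), and decapsulation (recovering that same secret with the secret key). The toy Python sketch below illustrates only this interface shape using classic Diffie-Hellman arithmetic; it is not HQC, not code-based, and not quantum-safe. It merely shows the API pattern that KEM standards such as HQC and ML-KEM expose.

```python
import hashlib
import secrets

# Toy parameters for illustration only: a Mersenne prime and a small generator.
# Real KEMs like HQC rest on entirely different, quantum-resistant mathematics.
P = 2**127 - 1
G = 3

def keygen():
    """Generate a (public, secret) key pair."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def encapsulate(pk):
    """Produce a ciphertext and a shared secret from the recipient's public key."""
    y = secrets.randbelow(P - 2) + 1
    ct = pow(G, y, P)
    shared = hashlib.sha256(pow(pk, y, P).to_bytes(16, "big")).digest()
    return ct, shared

def decapsulate(sk, ct):
    """Recover the same shared secret using the secret key."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

# Both parties end up with an identical 32-byte secret, never sent on the wire.
pk, sk = keygen()
ct, sender_secret = encapsulate(pk)
receiver_secret = decapsulate(sk, ct)
assert sender_secret == receiver_secret
```

Migrating to post-quantum cryptography largely means swapping the mathematics inside these three functions while keeping this same calling pattern, which is why KEM standards like HQC can slot into existing protocols such as TLS.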

Prior to HQC, the SandboxAQ team also played a significant role in the development of SPHINCS+, one of the initial algorithms already selected by NIST as part of its initial set of PQC standards in 2022. With HQC now formally accepted into the standardization process, SandboxAQ has contributed to two of the five critical PQC standards for key exchanges and signatures, demonstrating deep and sustained leadership in quantum-resistant cybersecurity and ushering in a safer digital world.

SandboxAQ is uniquely positioned to improve cryptographic postures and ensure better compliance, fewer outages, and robust cybersecurity. It produces world-class cryptographic research, internationally recognized standards, and widely adopted cryptographic innovations. Leveraging this expertise, SandboxAQ also offers an industry-leading cryptography management product. Its flagship cryptographic offering, AQtive Guard, is trained on billions of cryptographic findings meticulously structured and enriched with supplemental data by its cryptography team. By cross-referencing and augmenting customers’ inventories, it enables efficient exploration and actionable insights. Leveraging its distinctive AI approach, seamless third-party integrations, and comprehensive 360-degree coverage sensors, AQtive Guard delivers unparalleled visibility and effectiveness for the protection of enterprises and governments.