Archive for Hammerspace

Hammerspace Announces FIPS 140-3 Validation

Posted in Commentary with tags on March 31, 2026 by itnerd

Hammerspace today announced support for FIPS 140-3 validated cryptography, enabling the Hammerspace Data Platform to be configured to meet the U.S. government standard for cryptographic security. This milestone positions Hammerspace to support deployments in federal, defense, healthcare, finance and other highly regulated environments. Integration into the Hammerspace Data Platform is planned for an upcoming release by the end of 2026.

By supporting FIPS 140-3 validated cryptography, Hammerspace meets key requirements for secure data protection in regulated environments and is advancing the integration of these capabilities into the Hammerspace Data Platform.
 

Security Enforced at the Data Layer for Consistent Control, Compliance and Data Sovereignty

Hammerspace delivers policy-driven orchestration, governance and protection across distributed environments, providing consistent control in multi-site and hybrid-cloud architectures. With the integration of FIPS 140-3 validated cryptography, the platform is designed to provide:
 

  • End-to-End Encryption with FIPS-Validated Security: Support for encrypting data in-flight and at-rest using FIPS 140-3 validated cryptographic modules, aligning with federal security requirements.
  • Built-In Data Protection and Ransomware Resilience: Immutable snapshots, clones and WORM capabilities to enable rapid recovery and protect against unauthorized modification or deletion.
  • Consistent Security Enforcement Across a Global Namespace: Centralized policy enforcement across the global namespace, ensuring consistent protection across sites, clouds and storage systems.
  • Unified Access Controls Across Protocols and Environments: Consistent access policies across file and object data, spanning NFS, SMB and S3.
  • Policy-Driven Data Governance Sovereignty and Orchestration: Metadata-driven data placement policies to control where data resides, how it moves and how it is used in real time.
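To make the encryption bullet concrete: FIPS 140-3 validation applies to the cryptographic module itself (for example, a validated OpenSSL build), not to application code, but the approved algorithm families — AES, SHA-2, HMAC — are the same ones any caller uses. A minimal sketch, using only Python's standard library, of the kind of FIPS-approved keyed-hash integrity check such modules provide (the key and manifest names here are purely illustrative):

```python
# Illustrative only: FIPS 140-3 validation covers the cryptographic module
# itself, not application code. This sketch simply exercises FIPS-approved
# algorithms -- SHA-256 and HMAC -- from Python's stdlib to show the kind
# of integrity protection validated modules supply.
import hashlib
import hmac

def integrity_tag(key: bytes, data: bytes) -> str:
    """Compute an HMAC-SHA-256 tag (a FIPS-approved keyed hash) over data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(integrity_tag(key, data), tag)

key = b"demo-key-not-for-production"
tag = integrity_tag(key, b"snapshot-manifest-v1")
assert verify(key, b"snapshot-manifest-v1", tag)      # intact data passes
assert not verify(key, b"snapshot-manifest-v2", tag)  # tampered data fails
```

In a FIPS-validated deployment the same calls would be routed through a validated module operating in FIPS mode; the application-level pattern is unchanged.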


The Federal Information Processing Standard (FIPS) 140-3 is defined by the National Institute of Standards and Technology (NIST), and establishes stringent requirements for the design, implementation, and validation of cryptographic modules used to protect sensitive data. Validation requires independent testing by accredited laboratories and is mandatory for systems used by U.S. federal agencies and organizations operating under stringent compliance mandates.

Learn more about Hammerspace solutions for the public sector at https://hammerspace.com/public-sector/.

Hammerspace Data Platform Wins 2026 Artificial Intelligence Excellence Award

Posted in Commentary with tags on March 25, 2026 by itnerd

Hammerspace today announced it has been named a winner in the 2026 Artificial Intelligence Excellence Awards in the Internet and Technology category. Presented by the Business Intelligence Group, the award recognizes organizations, products, teams, and individuals that are applying artificial intelligence in ways that drive real, measurable impact.

At a time when AI infrastructure is constrained less by compute than by data bottlenecks, the Hammerspace Data Platform redefines how unstructured data is accessed, orchestrated, and delivered to GPU-intensive workloads – without requiring proprietary clients, new storage silos or disruptive data migrations.

Hammerspace provides the foundation to activate unstructured data at scale, wherever it lives. Instead of forcing enterprises to copy data into yet another AI storage silo, Hammerspace creates a unified global data environment that orchestrates data across existing infrastructure. The result is less manual effort, fewer unnecessary copies, lower operational drag, and a much faster path to production AI.

Built on standard Linux NFS and pNFS v4.2, and advanced through years of upstream kernel innovation, Hammerspace delivers true parallel performance and linear scalability using the native clients already deployed across most GPU environments. That means high-performance data access without proprietary software, infrastructure lock-in, or the operational drag of introducing another specialized storage stack.
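From an application's point of view, the parallelism described above is just ordinary POSIX file I/O issued concurrently — with pNFS, the kernel client fans those reads out across data servers. A hedged sketch of that access pattern, using a local temporary directory to stand in for an NFSv4.2 mount point:

```python
# Sketch of the access pattern pNFS enables: plain POSIX reads issued in
# parallel. A local temp directory stands in for an NFSv4.2 mount -- with
# pNFS, the identical open()/read() calls are striped across data servers.
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_read(paths, workers=8):
    """Read many files concurrently, as GPU data loaders typically do."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: Path(p).read_bytes(), paths))

with tempfile.TemporaryDirectory() as mnt:  # pretend this is the NFS mount
    paths = []
    for i in range(16):
        p = Path(mnt) / f"shard-{i:02d}.bin"
        p.write_bytes(bytes([i]) * 1024)
        paths.append(p)
    shards = parallel_read(paths)

assert len(shards) == 16 and shards[3] == bytes([3]) * 1024
```

No special client library appears anywhere in the code — which is the point of the standards-based approach.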

Hammerspace also extends performance further with Tier 0, which turns underutilized NVMe inside GPU servers into a shared, ultra-low-latency data tier. Combined with topology-aware data placement, Hammerspace aligns data with compute automatically – keeping GPUs fed faster, reducing bottlenecks, and increasing the efficiency of existing infrastructure.

Together, these outcomes demonstrate that Hammerspace delivers tangible business value: faster AI pipelines, higher utilization of costly GPU resources, lower infrastructure spend and reduced operational complexity – while maintaining flexibility across on-premises and cloud environments.

The Artificial Intelligence Excellence Awards spotlight organizations advancing AI into practical, accountable deployment. The 2026 program recognized winners across 36 industries and more than 15 countries.

Learn More: Hammerspace Data Platform Overview

Hammerspace Launches AI Data Platform Based on NVIDIA Reference Design 

Posted in Commentary with tags on March 17, 2026 by itnerd

Hammerspace announced today the general availability of its new AI Data Platform (AIDP) solution. AIDP is a turnkey approach that removes one of the biggest barriers preventing enterprise AI pilot projects from reaching production: the lack of seamless access to distributed enterprise datasets. It does this without creating new copies, performing slow migrations, or relying on manual preparation and curation, dramatically simplifying and securing the process of curating AI-ready data.

The Hammerspace AIDP meets enterprises where they are by allowing them to start making their existing data AI-ready using the infrastructure they already own, without deploying a separate AI storage system. By uniquely leveraging data in place, Hammerspace eliminates the need to purchase massive amounts of new flash just to house AI data. 

Solving the Primary Blockers to Enterprise AI Success
 

Eliminate Data Fragmentation. Identifying, gathering, organizing, and transforming unstructured data into an AI-ready format remains labor-intensive and highly manual. In most enterprises, the same work – finding the right data, enriching metadata, and shaping it into a form AI agents and models can use – is repeated across teams, projects and platforms because the data estate is fragmented. Hammerspace eliminates data fragmentation by providing a unified view across heterogeneous systems and automating the entire pipeline that produces AI-ready data for applications.
 

Skip Costly Mass Migrations. By enabling customers to use data in place, Hammerspace eliminates tedious migrations and the heavy manual work behind copy-first pipelines that consume human capital and stall initiatives. Instead of requiring a new AI storage buildout just to get started, the platform accelerates time to value and time to answer by making distributed data immediately usable for enterprise AI.
 

Reduce Data Copies. Hammerspace defeats data gravity by continuously cataloging distributed data in place, then using its Model Context Protocol (MCP) server to coordinate with NVIDIA and other AI tools and applications so only the data that’s needed moves, when it’s needed. With policy- and security-driven automation managing placement and flow end to end, vectors and source data stay continuously synchronized with consistent governance, compliance and performance. This allows pilot programs to scale cleanly into production with operational simplicity.
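The "only the data that's needed moves" idea can be sketched as a catalog query: file metadata is matched against a job's criteria, and only matching files not already at the target site are scheduled for movement. All field and function names below are hypothetical illustrations, not Hammerspace's actual API:

```python
# Hypothetical sketch of "move only the data that's needed, when it's
# needed": a metadata catalog is filtered by a job's criteria, and only
# matching files absent from the target site are scheduled for placement.
# Field names are illustrative, not Hammerspace's actual API.
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    site: str
    tags: frozenset

def select_for_job(catalog, required_tags, target_site):
    """Return only files the job needs that aren't already at the target."""
    need = set(required_tags)
    return [f.path for f in catalog
            if need <= f.tags and f.site != target_site]

catalog = [
    FileRecord("/proj/a/img1.dcm", "edge-1", frozenset({"radiology", "deid"})),
    FileRecord("/proj/a/img2.dcm", "gpu-dc", frozenset({"radiology", "deid"})),
    FileRecord("/proj/b/notes.txt", "edge-1", frozenset({"clinical"})),
]
to_move = select_for_job(catalog, {"radiology", "deid"}, "gpu-dc")
assert to_move == ["/proj/a/img1.dcm"]  # img2 is already at the GPU site
```

Everything else stays in place, which is what keeps copies, cost and governance drift down.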

Image 1: The Hammerspace AI Data Platform: Seamless Access to Distributed Enterprise Datasets
 

Delivered and Validated by SHI, the Premier Experts in AI Transformation
 

SHI has been a key partner in the development and testing of the Hammerspace AI Data Platform solution, using its AI and Cyber Lab to quickly showcase the value and integrations across technologies for enterprise-scale AI factories.
  

Full End-to-End Solution on Cisco UCS with Secuvy DSPM

Hammerspace also delivers solutions that meet enterprise demands across the spectrum by combining best-of-breed technologies from its ecosystem partners. To provide organizations with a complete, validated and secure AI infrastructure, Hammerspace has established key partnerships and achieved major integration milestones.
 

All-in-One Orchestration: Hammerspace collapses as many as 15 disconnected tools for data discovery, cataloging, classification, policies and movement into a single orchestration layer, providing unified data insight, management and access. The platform is also the first to deliver a fully agentic data foundation, intelligently managing data placement and flow based on real-time demand.
 

  • NVIDIA Partnership: The Hammerspace AIDP is built on NVIDIA’s reference design, ensuring optimal performance and compatibility with accelerated computing platforms, including NVIDIA RTX PRO 6000 and RTX PRO 4500 Blackwell Server Edition GPUs. Using NVIDIA AI Enterprise software, including NIM microservices and NeMo Retriever, Hammerspace converges data management with data orchestration across heterogeneous storage to simplify and automate the data pipeline and deliver the security, governance and content indexing required for high-performance inference, retrieval-augmented generation (RAG) and agentic AI.
      
  • Secuvy DSPM Integration: The Hammerspace AIDP is integrated with Secuvy’s Data Security Posture Management (DSPM) technology, providing customers with an end-to-end solution that prepares and delivers AI-ready data while ensuring continuous security monitoring, compliance, and governance throughout the entire data pipeline. 

Hardware Platform Flexibility: Hammerspace’s software-defined architecture provides the ultimate flexibility for the modern enterprise. Our AIDP can be delivered on a broad ecosystem of industry-leading hardware from partners including Cisco, Lenovo, and Supermicro. It seamlessly integrates with any server environment that meets performance specifications, ensuring organizations can leverage their preferred infrastructure without compromise.
 

Availability and More Information

Hammerspace will feature its AI Data Platform in Booth #7040 at NVIDIA’s GTC 2026, March 16-19, in San Jose, California. 

The solution is immediately available. Customers can contact their Hammerspace sales representative or authorized partners to operationalize their data for AI success.

Learn More

Hammerspace and Secuvy Partner to Make At-Scale Data AI-Ready, Fast and Safe, Across On-Premises and Cloud

Posted in Commentary with tags on March 10, 2026 by itnerd

Hammerspace, the high-performance data platform for AI anywhere, today announced a partnership with Secuvy to deliver a “Data-First” approach that turns raw data into secure AI outcomes. Together, the companies unify distributed unstructured data into a global namespace and continuously discover, classify, catalog, and control it across on-premises and cloud. 

Enterprise AI is hitting a hard wall, not just with compute demands, but also due to data sprawl and rising costs with no proven ROI. Unstructured data is fragmented across edge sites, legacy NAS systems, high-performance file systems, object stores and multiple clouds, often governed inconsistently. AI pipelines amplify risk by pulling from large, diverse datasets that may include confidential information. Without continuous discovery and classification, organizations risk exposing sensitive data in AI pipelines, losing track of what was used, and missing high-value insights. 

Together, Hammerspace and Secuvy keep data continuously AI-ready as it changes, so governance and access controls stay current from PoC to production.

  • Hammerspace provides the performance and orchestration layer so AI pipelines can reach distributed file and object data in place and move only what’s needed to the right compute at the right time.
     
  • Secuvy adds the intelligence layer, continuously identifying sensitive data and associated risks so privacy and governance controls can be applied consistently across hybrid and multi-cloud environments.

Image: The Integration of Hammerspace and Secuvy: A Data-First Model that Makes Data AI-Ready

Benefits of Hammerspace and Secuvy Partnership

Hammerspace and Secuvy enable a true Data-First model that makes data AI-ready. The integrated platform understands what the data is, where it lives, and the risk it carries, then controls how it’s used and where it can move, without forcing enterprises to rearchitect projects. Copying data drives up costs and increases risk: when data is duplicated across systems, governance breaks down and auditing, tracking, and securing it becomes difficult, allowing sensitive data to slip into AI pipelines without clear lineage or policy enforcement.

With the Hammerspace + Secuvy “Data-First” integration, organizations can make data AI-ready and enable:
 

  • One Global View – Unify distributed unstructured data into a global namespace across edge, on-premises, and multi-cloud
  • Sensitive Data Visibility – Continuously discover and classify sensitive data (PII/PHI/financial/IP) across file and object stores before it enters AI pipelines
  • Policy-Controlled Access – Catalog and control data in place using policies based on data attributes and risk
  • Continuous Compliance – Maintain consistent security and audit controls as data moves across sites and clouds—without copy-first silos
  • Just-In-Time Data – Move only what’s needed, when it’s needed, with intent-based data movement to compute
  • Use What You Have – Leverage existing storage as the foundation and free data to be processed wherever GPUs are available


Learn More:

Guest Post: Why SK Square Invested in Hammerspace: Data Orchestration for AI at Global + Sovereign Scale

Posted in Commentary with tags on March 4, 2026 by itnerd

By Molly Presley, Senior Vice President of Global Marketing at Hammerspace

AI infrastructure has hit a new hard limit and it’s not compute, it’s data. As organizations scale training and inference, the bottleneck has increasingly become the ability to find, govern, and deliver the right data to the right GPUs fast enough – across sites, clouds, and jurisdictions.

That’s why TGC Square, the overseas investment arm referenced in SK Square’s announcement this week, invested in Hammerspace: to back a platform purpose-built to eliminate data fragmentation and data-path friction, all while making sovereignty enforceable in the real world.

In the AI era, performance isn’t limited by how many GPUs you can buy; it’s limited by whether data can reach those GPUs fast enough. That’s why SK Square invested in Hammerspace: to back a data orchestrator that can logically unify distributed data and then move the right data to available GPUs without interrupting access. In a world where datasets span sites, clouds, and jurisdictions, orchestration is how you turn fragmented storage into an AI-ready data plane – globally and in sovereign environments.

AI Needs Distributed Data
AI pipelines don’t stay neatly inside one storage system, data center, or geographic location. Data is created in one place, enriched in another, and consumed wherever GPU capacity exists. The common “fix” is to duplicate datasets into new AI silos per region or per cluster.

That approach creates a familiar failure mode:

  • More copies → more drift
  • More silos → more policy gaps
  • More manual governance → more operational risk
  • More storage sprawl → more cost and slower AI cycles

Global namespace + orchestration is what makes Sovereign AI real: one consistent view of data everywhere, with policy-driven control over where each file can live, move, and be computed on, so data stays where it must, access is provable, and AI runs at full speed.

The Basis for the Investment 
Hammerspace addresses the modern AI constraint with data orchestration within a global namespace that turns fragmented data sets into a unified data estate – across distributed environments while staying within sovereign boundaries. Our unique data platform can:

  • Orchestrate data in place by indexing and leveraging file metadata, so teams can use distributed datasets without disruptive migrations or creating new storage silos.
  • Orchestrate access through a global namespace so users and applications see one consistent view of data across on-premises, multi-site, and cloud environments.
  • Orchestrate policy-driven outcomes so data movement, placement, performance, durability, and compliance behaviors are automatically enforced, and continuously re-evaluated as infrastructure and requirements change.
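The third bullet — placement that is automatically enforced and continuously re-evaluated — can be illustrated as a pure function from file attributes to a desired placement: when attributes change, re-running the same policy yields the new required outcome. The tier names and attribute keys below are hypothetical, not Hammerspace's actual policy language:

```python
# Hypothetical sketch of objective-based policy re-evaluation: desired
# placement is a pure function of current file attributes, so when the
# attributes change (demand, age, residency), re-running the policy yields
# the new placement. Names are illustrative, not Hammerspace's API.
def desired_tier(attrs):
    """Map file attributes to a placement objective."""
    if attrs.get("sovereign"):          # compliance wins over performance
        return "in-region-tier1"
    if attrs.get("hot"):                # active AI workload demand
        return "tier0-nvme"
    if attrs.get("age_days", 0) > 90:   # cold data drops to object storage
        return "object-archive"
    return "tier1"

attrs = {"hot": True, "age_days": 2}
assert desired_tier(attrs) == "tier0-nvme"

attrs.update(hot=False, age_days=120)   # demand subsided, data aged
assert desired_tier(attrs) == "object-archive"

assert desired_tier({"sovereign": True, "hot": True}) == "in-region-tier1"
```

Making placement a function of attributes, rather than a one-time decision, is what lets policies be re-evaluated as infrastructure and requirements change.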

AI Data Access: Controlled Participation, Not Copied Isolation
AI only delivers value when the right data can be found, accessed, and delivered to GPUs quickly. This becomes more complex, more labor-intensive, and much slower when that data is spread across sites, clouds, and storage systems. The instinctive response is to copy everything into a dedicated “AI zone” or per-cluster silo. That creates delays, duplicate datasets, and governance drift.

Hammerspace takes a different approach: one global namespace for access, paired with policy-driven orchestration that determines what data can be used where, by whom, and under what conditions—at file granularity. Teams get the speed and simplicity of local access, without forcing new silos or breaking the guardrails that matter in regulated and sovereign environments.

Proven in Demanding Environments
Hammerspace has been adopted in high-scale, high-performance environments, including top-tier customers referenced in the announcement such as Meta and Los Alamos National Laboratory, where data bottlenecks pose an existential threat to productivity and compute ROI.

And it’s driven by deep systems expertise: David Flynn previously founded Fusion-io, which was acquired by SanDisk — experience that shows up in a platform built to remove I/O friction instead of adding new layers of overhead.

The Flynn Factor
Hammerspace was founded and is led by Flynn, a first mover who sees what infrastructure must become next and builds it before the market even has the language for it. Flynn invented the PCIe flash model at Fusion-io (the NVMe precursor) and sold it to SanDisk. Hammerspace is the next act: a global namespace data orchestrator engineered to remove I/O friction and feed GPUs at full speed—without creating new silos or breaking sovereignty.


Hammerspace Promotes Tony Asaro to Lead Sales and Business Development Organization 

Posted in Commentary with tags on January 26, 2026 by itnerd

Hammerspace today announced the promotion of Tony Asaro to Chief Business Officer. In this expanded role, Asaro will lead Hammerspace’s global revenue organization — including sales, alliances, channel and go-to-market strategy — to meet rapidly growing demand from enterprises, governments, hyperscalers and Neoclouds to build AI infrastructure and data strategies around data sovereignty, high-performance training and agile inference. 

Asaro previously led Hammerspace’s strategy and alliances teams, driving revenue and market expansion through technology partnerships spanning cloud platforms, systems providers and GPU ecosystem leaders. His appointment reflects increasing market demand for infrastructure architectures that deliver high-performance storage to feed GPUs wherever they are — across sovereign regions, on-premises environments, and public cloud — supporting production inference and agentic AI without compromising compliance or operational simplicity. 

Alliance Momentum: Oracle Highlights Hammerspace for Sovereign + Hybrid AI 

Hammerspace’s expanding partner momentum was recently underscored when Oracle highlighted Hammerspace alongside OCI Dedicated Region. Enterprises can deploy OCI services inside their own data centers to meet sovereignty requirements — and use Hammerspace as a unified, policy-driven data layer to present a global namespace and orchestrate data placement across sites and clouds based on performance, cost and compliance. 

This combination supports regulated, hybrid AI strategies by enabling teams to run compute near data, reduce unnecessary movement, avoid unmanaged copy sprawl and accelerate AI pipelines that demand consistent, high-performance data access. “The result,” says author Riley Burdon, “is an operating model that can help address residency requirements, simplify hybrid operations, and let you run AI where your data lives — without proliferating unmanaged copies or rewriting workflows.” 

Continuous Sales Momentum and Coverage 

Hammerspace enters 2026 with strong sales momentum, driven by strategic partner expansion, substantial VAR channel growth (with just under 200 resellers), and international expansion. Over the past year, the company launched its Asia headquarters in Singapore and scaled engagement across China and South Korea, while building new regional coverage for India and the Middle East from Dubai—extending field capacity, partner reach, and customer delivery for sovereign AI and GPU-intensive deployments. 

CRN Recognizes Hammerspace for AI Training and Inferencing Performance on 2026 Cloud 100 List

Posted in Commentary with tags on January 12, 2026 by itnerd

Hammerspace today announced it has been named to the 2026 Cloud 100 list by CRN®, a brand of The Channel Company. The annual list includes the most innovative channel-focused cloud technology companies transforming how enterprises deploy and scale cloud infrastructure. 

Hammerspace was recognized for shaping how organizations run AI training and inference in the cloud. Its data platform delivers Tier 0 storage performance to speed AI results, then automatically transitions data to cost-efficient object storage once demand subsides.

Hammerspace software was purpose-built to operate across on-premises, cloud and hybrid environments, allowing enterprises to move data to compute wherever GPUs are available.

This architecture makes Hammerspace ideal for organizations that need to:

  • Maximize GPU efficiency during AI training or inferencing
  • Avoid permanent costs for large high-performance cloud storage pools
  • Maintain open, standards-based architectures

Tier 0: Maximum Performance, Without Permanent Cost

Hammerspace’s Tier 0 delivers direct, NVMe-class storage performance to GPUs, eliminating the I/O bottlenecks that commonly stall GPU pipelines in the cloud. Unlike traditional cloud storage models that force customers to pay premium prices for ongoing high-performance storage, Hammerspace enables a dynamic, workload-aware approach. This allows organizations to:
 

  • Run AI workloads in the cloud on GPU clusters with Tier 0 data storage performance
  • Sustain full GPU utilization with parallel, high-throughput data storage access
  • Automatically orchestrate data movement of job outputs to object storage
  • Improve cloud economics without sacrificing performance

The result is faster AI, higher GPU cluster efficiency, and dramatically lower cloud storage costs.

How Hammerspace’s Data Platform Works

Image: The Hammerspace Data Platform provides a single, unified namespace that spans existing on-premises storage and cloud resources, giving users and applications a single, secure way to see and access data across storage types, clouds and multiple sites.
 

1. AI Workload Starts – Tier 0 Becomes Engaged: Data is delivered directly to cloud GPUs using Tier 0 NVMe-class performance, eliminating I/O bottlenecks and keeping GPUs fully utilized.

2. Workload Completes, Hammerspace Orchestrates Data Movement: Hammerspace’s Data Platform automatically transitions outputs to object storage, where cost-efficient, scalable capacity makes economic sense.

3. Unified Namespace = No Silos, No Rewrites: Applications see a single global namespace across on-premises and cloud environments, which means no application changes, no manual data movement, no vendor lock-in.

4. Repeat On-Demand: When demand spikes again, data is instantly staged back to Tier 0 for performance — without permanent high-performance cloud storage infrastructure costs.
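The four steps above can be sketched as a toy state machine: data is staged to Tier 0 when a workload starts, demoted to object storage when it completes, and staged back on the next demand spike. Tier names and method names here are illustrative only:

```python
# The four-step flow as a toy state machine: stage to Tier 0 on demand,
# demote to object storage when the workload completes, repeat as needed.
# Tier names are illustrative, not Hammerspace configuration values.
class DatasetPlacement:
    def __init__(self):
        self.tier = "object"          # cost-efficient resting state
        self.stage_count = 0

    def workload_starts(self):
        """Step 1: deliver NVMe-class Tier 0 performance to GPUs."""
        self.tier = "tier0-nvme"
        self.stage_count += 1

    def workload_completes(self):
        """Step 2: automatically demote outputs to object storage."""
        self.tier = "object"

placement = DatasetPlacement()
placement.workload_starts()
assert placement.tier == "tier0-nvme"
placement.workload_completes()
assert placement.tier == "object"
placement.workload_starts()           # step 4: repeat on demand
assert placement.stage_count == 2     # no permanent Tier 0 footprint needed
```

Because the namespace is unchanged throughout (step 3), applications see the same paths regardless of which tier currently holds the data.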

CRN’s Cloud 100 companies demonstrate dedication to supporting channel partners and advancing innovation in cloud-based products and services. The list is the trusted resource for solution providers exploring cloud technology vendors that are well-positioned to help them build cloud portfolios that drive their success.

In 2025, Hammerspace launched several new campaigns and resources to help its partner community drive success, including extensive cloud marketplace and enablement resources, new distribution models, an expanded partner portal and continued growth of its global team. In addition, the international “Hammerspace Partner Roadshow 2025: AI Anywhere” equipped Hammerspace partners with critical insight, tools, and connections to accelerate their AI businesses.

Hammerspace Breaks IO500 Barriers: First Standards-Based Linux + NFS System To Achieve True HPC-Class Performance

Posted in Commentary with tags on November 21, 2025 by itnerd

Hammerspace has announced a breakthrough IO500 10-Node Production result that establishes a new era for high-performance data infrastructure. For the first time, a fully standards-based architecture — standard Linux, the upstream NFSv4.2 client, and commodity NVMe flash — has delivered a fully reproducible 10-node Production IO500 result at a level traditionally achievable only by proprietary parallel filesystems.

This result is the first IO500 Production benchmark to prove that standards-based Linux and NFS can meet the extreme performance requirements of high-performance computing (HPC) and artificial intelligence (AI) workloads — without proprietary client software, specialized networking stacks or complex parallel filesystem infrastructure.

A Milestone Moment for Data Platforms — As Transformative as Linux Was for Compute

In the late 1990s, researchers like Dr. David Bader, Distinguished Professor and founder of the Department of Data Science in the Ying Wu College of Computing and Director of the Institute for Data Science at New Jersey Institute of Technology, transformed the HPC world by proving that clusters built on Linux and commodity components could rival proprietary supercomputers. That work transformed HPC architecture and laid the groundwork for the machine learning (ML) and AI architectures that followed, ultimately making Linux the standard powering nearly every major compute environment on Earth. 

This vision laid the foundations for the AI architectures that are emerging even today. Hyperion Research estimates that “over $300 billion in revenue has been generated from selling supercomputers. This represents a sizable economic gain, especially since the use of these systems generated research valued at least ten times over the purchase price. While it is difficult to fully measure the value that supercomputers have generated, even looking at just automotive, aircraft, and pharmaceuticals, supercomputers have contributed to products valued at more than $100 trillion over the last 25 years.”

Hammerspace’s IO500 achievement represents the next chapter of that evolution, this time in the data layer.

Just as Linux revolutionized compute architecture, the combination of standards-based Linux and pNFS is now proving it can revolutionize high-performance data architecture for HPC and AI.
 

The First Architecture That Meets the Demands of Both HPC and AI

This achievement marks the industry’s proof that open, interoperable infrastructure can deliver the performance required by AI and HPC workloads without proprietary lock-in.

HPC environments have traditionally relied on deep institutional expertise to operate complex proprietary filesystems, but the rapid rise of AI has changed the landscape. AI is scaling far faster — across enterprises, cloud providers, sovereign AI platforms, service providers and thousands of new data-intensive applications — and it is impossible to meet this demand with architectures that require niche expertise to deploy and maintain. Every systems administrator already knows how to operate Linux and NFS; however, very few have the specialized knowledge required for legacy parallel file systems. As AI infrastructure becomes mainstream, organizations need HPC-class performance delivered through tools and protocols familiar to the broader IT community. This IO500 result proves that the performance required for both HPC and AI can now be achieved using standard Linux, standard NFS and widely understood operational models, finally aligning extreme performance with the scale and accessibility the AI industry demands.

Standards-Based Architecture, Industry-Leading Performance

The submission by Samsung, leveraging the Hammerspace Data Platform, achieved the fastest standards-based IO500 10-Node Production result ever recorded. Hammerspace not only contributes many of the pNFS capabilities upstream into Linux, but its Data Platform is also engineered from the ground up to capitalize on these performance enhancements in the Linux kernel.

Unlike traditional storage platforms and legacy parallel file systems that treat Linux as a compatibility layer or pNFS as an added-on interface, Hammerspace’s architecture is built directly on top of — and actively contributes to — the same NFSv4.2 and pNFS innovations driving modern HPC and AI performance. This deep alignment uniquely allows Hammerspace to take immediate advantage of new capabilities such as lower-latency I/O paths, advanced client-side parallelism and improved failover logic, translating Linux’s ongoing advancements directly into real-world application speedups. As a result, organizations can benefit from cutting-edge performance improvements in standard Linux distributions without deploying proprietary clients or rearchitecting their infrastructure.

Unlike legacy parallel file systems that rely on complex, vendor-specific clients, Samsung’s Hammerspace submission used: 

  • Standard RHEL/Ubuntu Linux
  • Standard upstream NFSv4.2 (pNFS) client
  • Standard NVMe SSDs from Samsung
  • Standard IP-over-InfiniBand
  • Standard server platforms
  • Hammerspace’s standards-based parallel global file system leveraging the pNFS client

The submission used no proprietary client, no custom kernel modules and no exotic parallel file system.

Modern HPC and AI workloads can now run at elite speeds using standards-based infrastructure and data architectures.

Upstream Linux Innovation Unlocks New Performance

The step-function improvement achieved between the ISC25 and SC25 events is the result of:

  • Enhanced pNFS Flexible File layout parallelism
  • Upstream NFS client improvements contributed by Hammerspace
  • Upstream NFS server improvements that avoid page cache contention, allowing improved sustained performance and reduced resource utilization, contributed by Hammerspace
  • File-level objective-based policy optimizations
  • Latency reductions and throughput gains in metadata access
  • High-performance NVMe data placement managed through the Hammerspace global file system

These enhancements strengthen the entire Linux ecosystem — echoing the transformative Linux HPC contributions of the late 1990s and early 2000s.

Hammerspace’s Top-10 IO500 performance is more than a benchmark victory. It is the first empirical proof that standards-based Linux and NFS can power high-performance data systems at the top of the HPC and AI stack.

Linux democratized supercomputing and now standards-based data infrastructure is positioned to democratize high-performance storage — and reshape the future of global-scale computing.

Vanderbilt Advanced Computing Center for Research and Education (ACCRE) Selects Hammerspace to Power Next-Generation Research Data Infrastructure

Posted in Commentary with tags on November 20, 2025 by itnerd

Hammerspace today announced that the Vanderbilt Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University has selected Hammerspace to modernize its research data infrastructure.

ACCRE, Vanderbilt’s campus-wide HPC resource and research support facility, provides advanced computing and storage services for faculty and students across disciplines ranging from genetics and physics to engineering and social sciences. With a mission to “explore and benefit from the new world of computing,” ACCRE enables researchers to run large-scale simulations, data analyses and machine learning models critical to advancing discovery.

To meet growing data demands across hundreds of research projects, ACCRE sought a more flexible and cost-efficient approach to managing petabytes of research data. Historically, ACCRE has operated separate systems for primary and archive storage, including Panasas, GPFS and LStore. ACCRE wanted a solution that could unify its diverse storage tiers, leverage commodity hardware and dynamically provision storage resources across compute and GPU nodes.

After evaluating many vendors, ACCRE selected Hammerspace to deploy a 10-petabyte environment integrating CPU/GPU server-local storage for Tier 0 performance, newly purchased commodity storage servers for Tier 1, and multi-petabyte archival capacity from its existing LStore environment, all under a single global namespace.

By adopting Hammerspace in combination with LStore, ACCRE expects to reduce its average cost of storage by 48% while providing faster, more flexible data access to the Vanderbilt research community. The Hammerspace Data Platform’s open architecture aligns with LStore’s key characteristics to use commodity hardware instead of proprietary storage appliances, improving flexibility and reducing vendor lock-in.

Hammerspace Recognized for Third Consecutive Year as “Editors’ Choice” in 2025 HPCwire Readers’ and Editors’ Choice Awards

Posted in Commentary with tags on November 18, 2025 by itnerd

Hammerspace has been recognized among the “Editors’ Choice: Top 5 New Products or Technologies to Watch” for its outstanding efforts and accomplishments in HPC and AI, in the 22nd edition of the HPCwire Readers’ and Editors’ Choice Awards, presented at the 2025 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25), in St. Louis, Missouri.

This honor marks the third consecutive year Hammerspace has been recognized with an Editors’ Choice award by HPCwire. In 2024, Hammerspace was selected as “Editors’ Choice: Top 5 Vendors to Watch,” and in 2023, it was awarded “Top Five New Products or Technology” for its outstanding achievements and innovation. Winners of the Editors’ Choice awards are selected by a panel of HPCwire editors and thought leaders in HPC and constitute prestigious recognition from the HPC community.

Hammerspace’s Data Platform unifies unstructured enterprise data across diverse storage architectures, geographies, and protocols, enabling organizations to convert raw data into AI-ready intelligence with unprecedented speed. As a result, organizations achieve AI-driven outcomes faster, driving innovation and competitive advantage.

Traditional AI storage infrastructure requires moving or duplicating massive datasets to specialized silos, creating fragmentation between users, applications and storage systems. Hammerspace eliminates this challenge by providing a single global namespace that spans on-premises and cloud resources. By leveraging existing infrastructure and scaling seamlessly with growing needs, the platform delivers a robust foundation for the intersection of classical HPC and new AI workflows, including training, inference, Retrieval-Augmented Generation (RAG), complex agentic workflows and the emerging era of physical AI.
