Archive for Hammerspace

Hammerspace Announces Latest Version of its Data Platform Software

Posted in Commentary with tags on November 17, 2025 by itnerd

Hammerspace, the high-performance data platform for AI Anywhere, today announced the upcoming release of Hammerspace v5.2, delivering performance, security and ecosystem enhancements that help organizations unify, automate and accelerate their AI and high-performance workloads across any on-premises, hybrid or cloud-based infrastructure.

With v5.2, Hammerspace raises the bar on standards-based parallel file system performance, particularly for AI and HPC workloads, continuing the trajectory demonstrated in public benchmarks earlier this year. The new release achieved a 33.7% higher overall IO500 score than the previous version's results published five months ago, with total bandwidth doubling and individual sub-tests showing dramatic improvements — including an over 800% gain in IOR-Hard-Read.

A key component of these performance improvements is Hammerspace’s continued contribution of significant client-side NFS performance enhancements to the standard Linux kernel, improvements specifically designed to accelerate AI and HPC workloads. By tightly integrating Hammerspace software with these upstream kernel advancements, the Data Platform delivers dramatic performance gains without requiring customers to install proprietary software on application servers or trap their data into vendor-locked silos.

This standards-based approach means Hammerspace is compatible with any storage platform, enabling customers to adapt and deliver the performance and low latency needed for new workloads such as training, inference or RAG with existing infrastructure and data sets. This approach eliminates the cost and complexity of migrating data to net-new storage silos to launch AI projects.

To support extreme scale, v5.2 adds Share Referrals, a transparent mechanism that distributes the namespace across as many metadata servers as are needed to accommodate extreme file counts. This enhancement ensures linear scalability, so performance and responsiveness remain steady even as data estates for AI and HPC environments explode. 

The release also strengthens security options with the addition of Kerberos authentication and Labeled NFS support. By enabling SELinux and other Mandatory Access Control (MAC) systems to transport and enforce security labels across NFS, organizations gain consistent, fine-grained control over data access, which is essential for sensitive research, government and regulated industries.

Hammerspace v5.2 will further expand the platform’s reach by adding support for running Hammerspace in Oracle Cloud Infrastructure (OCI). New shapes, including bare metal, will be supported, and support for OCI Dedicated Regions will follow, providing a critical option for customers that must maintain strict data sovereignty across distributed environments.

This tight OCI integration extends Hammerspace’s multi-site, multi-cloud and multi-protocol capabilities, including its unique S3-connector technology, so customers can seamlessly bridge on-premises environments to cloud-based GPU-accelerated compute clusters in OCI, AWS and Azure. In this way, NFS-based applications gain native, transparent access to cloud compute resources without workflow changes or moving data into new silos.

This seamless hybrid cloud flexibility is what enables organizations such as Meta to burst extreme-performance AI workloads between on-premises data centers and GPU clusters in OCI, with data movement orchestrated among storage types and locations transparently in the background. At the same time, Hammerspace’s global namespace maintains consistent access for users and applications.

In addition to the baseline performance gains, v5.2 introduces Tier 0 affinitization, adding locality-aware intelligence to Tier 0 deployments. By automatically aligning data placement with the optimal servers within a GPU cluster, Tier 0 affinitization reduces east-west network traffic to accelerate throughput and simplifies Tier 0 deployments by eliminating the need for manual configuration. The feature is automatic, transparent and enabled by default.

Availability

Hammerspace v5.2 will be generally available in December. To learn more or request early access, visit www.hammerspace.com.

Hammerspace to Showcase Latest Software Release, New AI Data Platform Solution and Latest Performance Achievements at SC25

Posted in Commentary with tags on November 14, 2025 by itnerd

Hammerspace today announced it will showcase its latest capabilities at Supercomputing 2025 (SC25), taking place at the America’s Center Convention Complex in Saint Louis from November 17-20. At its booth #3523, Hammerspace will demonstrate its AI solution, aligned with the NVIDIA AI Data Platform reference design, to streamline data access for agentic AI applications. The solution enables seamless data access and orchestration across hybrid environments, ensuring that AI workloads always have instant access to the right data, without manual intervention or complex integration.

Through automated data objectives and tight integration with AI agents, Hammerspace’s platform intelligently tags, tiers and places data where it’s needed most, optimizing for both performance and cost. This automation ensures that AI models can train and infer faster, with data continuously in motion to meet the needs of high-performance computing (HPC) environments.

Hammerspace will also highlight its Tier 0 solution, which transforms the local NVMe storage within GPU clusters into a shared, high-performance storage tier. This capability delivers the ultra-low latency and high throughput demanded by AI training, checkpointing, inference and agentic AI workloads, all while maximizing existing GPU investments.

To schedule a meeting with Hammerspace executives during SC25, click here.

2026 Predictions From Hammerspace

Posted in Commentary with tags on November 13, 2025 by itnerd

Molly Presley, SVP of Global Marketing at Hammerspace, is sharing her insights on key emerging trends in data management, storage, and AI for 2026.

The End of Data Fragmentation as AI’s Silent Killer

In 2026, enterprises will need to confront fragmented data estates. The industry will recognize that the biggest limiter to AI adoption isn’t GPU supply—it’s data access speed, consistency, and reach. Organizations will shift investment from more compute to unified data platforms that make existing infrastructure AI-ready.

By the end of 2026, AI deployments will rely on data orchestration layers that abstract away underlying storage silos and present a single, global view of data across hybrid environments. This approach will mark the beginning of the post-storage era—where AI agents, RAG workflows, and LLMs access information anywhere it resides, without copying or migrating it.

The winners of the AI race will be those who treat data fragmentation not as a symptom to be managed, but as a core architectural flaw to be eliminated. Performance, cost efficiency, and scalability will all flow from this unification—turning “AI Anywhere” from an aspiration into the new enterprise standard.

Sovereign AI Will be a Driving Function of Infrastructure Decisions

By 2026, organizations will increasingly pivot from relying on commercial APIs to deploying AI workloads on-premises. Security, compliance, and governance concerns will drive demand for AI environments built on enterprise infrastructure rather than public APIs. This shift ensures organizations retain complete control of their data, models, and intellectual property — a priority as generative AI moves deeper into regulated and mission-critical use cases.

A Unified Data Estate Becomes the Strategic Battleground

The era of focusing solely on GPU availability is coming to an end. The real competitive advantage lies in creating unified, global data estates that can power inference and generative AI at scale. Enterprises will realize that fast storage isn’t enough — orchestrating massive, decentralized, unstructured data into a single global namespace is now essential. In 2026, infrastructure players who can eliminate silos across sites, storage systems, and clouds will become the most strategic players in AI adoption.

Energy and Efficiency Drive Infrastructure Innovation

The sheer scale of inference and GenAI workloads will force a reckoning with power and efficiency. By 2026, new infrastructure technologies — from smarter data orchestration layers to energy-aware storage and compute systems — will emerge as enterprises seek to manage costs and sustainability pressures. We expect infrastructure vendors to compete not only on speed and scale, but also on their ability to tame energy consumption while maintaining enterprise-class performance.

The Year of the AI Factory — Where Efficiency Defines Intelligence

2026 will be remembered as the year AI moved from experimentation to industrialization — the dawn of the AI Factory. Across industries, organizations will shift their focus from simply training bigger models to operationalizing intelligence at scale. The frontier will no longer be just about model size, but about how efficiently those models are fed, reasoned with, and deployed.

The world’s compute capacity is now bounded by energy and data movement, not transistors. As a result, efficiency will become the new scoreboard of AI progress — measured in tokens-per-watt, throughput-per-rack, and time-to-insight. Enterprises will realize that GPUs sitting idle due to data fragmentation or latency are not just a technical problem, but an economic one.

In 2026, AI Factories will rise as the modern equivalent of industrial power plants — unifying data, compute, and automation into tightly orchestrated systems that transform raw information into actionable intelligence at unprecedented speed. These environments will blur the boundaries between cloud and on-premises, between inference and training, and between virtual and physical AI. The AI data platform now exists; the AI Factory vision was not possible until this technology arrived.

Exabyte Is the New Petabyte — and the Era of Open Flash Has Begun

In 2026, the scale of AI data will cross a historic threshold: exabytes will become the new unit of design for large-scale data infrastructure. Governments, hyperscalers, and emerging neocloud providers are building AI datacenters with training and inference pipelines that demand instant access to data that once would have been relegated to cold archives. The challenge is no longer just capacity — it’s how to keep exabytes of data hot, fast, and efficient within strict limits on power and floor space.

This struggle is driving a fundamental rethink of storage architecture. Traditional controller-based systems and proprietary flash arrays can’t scale linearly or efficiently enough to meet the needs of AI-driven workloads. The new frontier is open, software-defined flash platforms — architectures that embed compute directly with storage media, collapse layers of inefficiency, and operate on open standards.

The Open Flash Platform (OFP) movement embodies this shift. By unifying flash media, DPUs, and open protocols under a common, composable design, OFP enables 10–50× higher density, 90% lower power consumption, and rack-scale performance that aligns with the needs of AI factories operating at exabyte scale.

2026 will mark the beginning of a new design paradigm for AI infrastructure — where data, models, and compute are treated as one continuous system, not separate layers. Flash becomes the substrate, but the true architecture is data-centric: built around how information flows, learns, and evolves across GPU clusters. Open Flash Platform (OFP) technologies will underpin this transformation by delivering the performance, efficiency, and openness needed for exabyte-scale AI factories — where data pipelines, not storage boxes, define the architecture.

Angela Bai Joins Hammerspace as China Country Manager

Posted in Commentary with tags on November 11, 2025 by itnerd

 Hammerspace, the high-performance data platform for AI Anywhere, today announced the appointment of technology veteran Angela Bai as its China Country Manager, underscoring the company’s accelerated global expansion and commitment to one of the world’s most dynamic AI markets.

With more than 20 years of leadership experience driving strategic growth and channel development for major technology companies, including Quantum, Sun Microsystems and Impinj, Bai brings a proven track record of building high-impact teams and partnerships across China’s enterprise and hyperscale markets. She will lead Hammerspace’s operations, partnerships and customer success strategy in China, enabling organizations to harness distributed unstructured data for large-scale AI and high-performance computing (HPC) workloads.

Hammerspace entered the Chinese market earlier this year as part of its global growth strategy to make AI infrastructure more efficient and accessible. The company is seeing a surge in demand from Chinese hyperscalers and enterprises seeking to eliminate data silos and accelerate AI development with unified, high-performance data orchestration.

According to Morgan Stanley Research, China’s core AI industry is projected to reach $140 billion by 2030, expanding to $1.4 trillion when infrastructure and component ecosystems are included.

The Hammerspace Data Platform eliminates the need for costly infrastructure overhauls or new storage silos, enabling enterprises to seamlessly harness their existing data for accelerated AI computing. Hammerspace, a member of the NVIDIA Inception program, unifies unstructured enterprise data across diverse storage architectures, geographies, and protocols, enabling organizations to convert raw data into AI-ready intelligence with unprecedented speed. By leveraging existing infrastructure and scaling seamlessly with growing needs, the platform delivers a robust foundation for Retrieval-Augmented Generation (RAG), complex agentic workflows, and the emerging era of physical AI. With Hammerspace, enterprises achieve AI-driven outcomes faster, driving innovation and competitive advantage.

Current open positions at Hammerspace are available on its Careers page.

Hammerspace Unveils AI Data Platform Solution to Transform Enterprise Data for the Agentic AI Anywhere Era  

Posted in Commentary with tags on October 28, 2025 by itnerd

Hammerspace, the high-performance data platform for AI Anywhere, today unveiled its solution designed to streamline enterprise data access for agentic AI applications. Aligned with the NVIDIA AI Data Platform reference design, this innovative new solution eliminates the need for costly infrastructure overhauls or new storage silos, enabling enterprises to seamlessly harness their existing data for accelerated AI computing. 

Hammerspace — a member of the NVIDIA Inception program — unifies unstructured enterprise data across diverse storage architectures, geographies, and protocols, enabling organizations to convert raw data into AI-ready intelligence with unprecedented speed. By leveraging existing infrastructure and scaling seamlessly with growing needs, the platform delivers a robust foundation for Retrieval-Augmented Generation (RAG), complex agentic workflows, and the emerging era of physical AI. With Hammerspace, enterprises achieve AI-driven outcomes faster, driving innovation and competitive advantage. 

Simplify the Data Estate Without Adding Another Storage Silo  

Traditional AI storage infrastructure requires moving or duplicating massive datasets to specialized silos, creating fragmentation between users, applications, and storage systems.  Hammerspace eliminates this challenge by providing a single global namespace that spans on-premises and cloud resources.  

Using Hammerspace’s automated data objectives and tight integration with AI agents, data is intelligently tagged, tiered, and placed in the right location at the right time — optimizing both performance and cost. This automation ensures that training and inference workloads always have immediate access to the data they need, without manual data movement or complex integration layers, enhancing and accelerating AI queries.  

Multi-protocol support for pNFS, NFS, SMB, and S3, with POSIX-compliant file access, ensures compatibility with existing enterprise applications, while maintaining instant access for users and AI systems alike.  

Accelerate and Transform Enterprise Data for the Agentic Era  

The Hammerspace Data Platform leverages the NVIDIA AI Enterprise software platform and integrates with NVIDIA accelerated computing and NVIDIA networking to deliver unmatched performance and scalability:   

At the core of the architecture, Hammerspace Tier 0 delivers better than line-rate performance by unifying NVMe inside GPU nodes to accelerate processing and maximize resource utilization. The integrated Milvus vector database and Model Context Protocol (MCP) services transform unstructured enterprise data into searchable embeddings and create seamless connections between AI agents and business data. This combination enables real-time access, reasoning, and retrieval for AI agents operating across the enterprise data estate.

Streamlined and Scalable AI Data Platform Packaging  

The Hammerspace Data Platform for AI Anywhere is delivered as a validated, easy-to-deploy solution aligned with the NVIDIA AI Data Platform reference design. It enables customers to begin with a small, project-based configuration and scale linearly as AI workloads expand.  

  • Start small: Validate AI initiatives and pilot projects.  
  • Scale linearly: Expand seamlessly to multi-site or global architectures.  
  • Channel-first: Available exclusively through strategic Hammerspace channel partners, ensuring enterprise-class deployment, support, and lifecycle services.  

Availability

The Hammerspace reference design for the NVIDIA AI Data Platform will be showcased at NVIDIA GTC in Washington, D.C. and will be available through authorized Hammerspace partners in late 2025.  

Hammerspace Demonstrates Breakthrough in GPU Storage Performance at Oracle AI World 2025

Posted in Commentary with tags on October 13, 2025 by itnerd

Hammerspace today announced that it will demonstrate the power and performance of its Tier 0 architecture at Oracle AI World 2025, October 13–16 in Las Vegas. With Tier 0, Oracle Cloud Infrastructure (OCI) Supercluster – a bare metal GPU server cluster – operates with ultra-high-performance shared storage, helping to reduce bottlenecks and minimize GPU idle time.

By transforming existing local NVMe storage in OCI GPU shapes into a persistent, ultra-fast shared storage tier, Hammerspace eliminates data silos and unifies storage, unlocking a new level of efficiency and performance for AI workloads. Hammerspace will demonstrate its Tier 0 architecture, which enables AI training, checkpointing, inference and agentic AI workloads to run with higher throughput, lower latency and better GPU utilization, while providing access to all this data in a single namespace.

Performance That Redefines the Rules
Recent benchmark testing on OCI demonstrates the power of the Tier 0 architecture, with results that include:

  • Up to 7x improvement in latency vs. traditional cloud file storage.
  • Up to 6x improvement in storage performance vs. traditional cloud file storage.
  • Checkpointing at extreme speeds, crushing idle time.
  • Throughput so fast it keeps GPUs fed 24/7, not waiting on data.
  • Policy-driven flexibility to move cold data to lower-cost tiers without touching the hot path. 

Hammerspace will also present a session on Tier 0 at the event:

     Topic: Increase Performance and Reduce Idle Time of Your GPU Workloads in OCI
     Presenter: Raj Sharma, Cloud Field CTO, Hammerspace
     Location: NVIDIA Booth #1013
     Date/Time: Tuesday, October 14, 5:30–6:00 p.m.


Hammerspace Wins the “Data Platform Tech — AI-Optimized Data Platforms” Category in the SiliconANGLE TechForward Awards

Posted in Commentary with tags on August 26, 2025 by itnerd

Hammerspace today announced that it has been named a winner in SiliconANGLE’s 2025 TechForward Awards in the “Data Platform Tech — AI-Optimized Data Platforms” category.

The Hammerspace Data Platform stands out from other solutions in the market with an open, data-centric architecture optimized for high-performance AI and HPC workloads, making all data an instantly accessible resource for AI models, applications, compute clusters and users. By unifying global access to all data across existing storage, silos, sites and clouds, Hammerspace enables organizations to leverage their existing infrastructure without needing to purchase net-new, proprietary storage.

Hammerspace simplifies the AI journey for customers with its ability to extend into cloud-based compute environments, without disruption to existing users/applications or requiring massive data migrations into new silos. By activating the underutilized capacity customers already own in their GPU servers, Hammerspace’s Tier 0 accelerates inferencing and checkpointing and reduces the need to purchase additional external high-performance storage.

The TechForward Awards recognize the technologies and solutions driving business forward. As the trusted voice of enterprise and emerging tech, SiliconANGLE applies a rigorous editorial lens to highlight innovations reshaping how businesses operate in our rapidly changing landscape. This awards program honors both established enterprise solutions and breakthrough technologies defining the future of business, spanning AI innovation, security excellence, cloud transformation, data platform evolution and blockchain/crypto tech. Hammerspace was selected from a competitive field of nominees by a panel of industry experts and technology leaders.

For more information, visit https://siliconangle.com/awards/.

Guest Post: Hammerspace Announces MLPerf v2.0 Benchmark Results, Demonstrates the Simplicity, Performance, and Efficiency of Tier 0 

Posted in Commentary with tags on August 7, 2025 by itnerd

Tech industry benchmarks are interesting things. Some seem designed mostly for winners to brag to their industry buddies and the press. Like a drag race, where straight-line speed in the quarter mile is all that counts. Those are fun but not really useful, because nobody lives exactly ¼ mile from the grocery store down a straight, flat, empty road. 

The benchmarks that are useful to AI and infrastructure architects are the ones that simulate real-world workloads. A little highway driving, some low speed around town stuff, trailer towing, etc. This is why we like the MLCommons® MLPerf Storage benchmark suite and are actively involved in efforts to expand and improve it. MLPerf Storage simulates a variety of realistic AI/ML workloads. The results provide relevant data points for organizations evaluating storage architectures for AI. 

Let’s review the results, then I’ll explain how they were achieved and why they matter. 

Results Summary

For this round, we ran the 3D U-Net benchmark with simulated H100 GPUs.

Note: Previous submissions and alternative benchmark configurations can be found in the MLPerf Storage Benchmark Results technical brief.

3D U-Net emulates a medical image segmentation workload. It’s the most bandwidth-intensive of the MLPerf Storage benchmarks, highlighting parallel I/O throughput as well as memory and CPU efficiency. Three configurations were tested, with one, three, and five Tier 0 nodes respectively. The table and graph below summarize the results. 

Tier 0 Node Quantity   H100 GPUs Supported   Total Throughput   Mean GPU Utilization   Coefficient of Variation
1                      28                    85.6 GB/s          94.7%                  0.14%
3                      84                    253.1 GB/s         95.0%                  0.13%
5                      140                   420.8 GB/s         96.4%                  0.08%

Notice that both the number of GPUs supported and total throughput scale linearly as the number of Tier 0 nodes increases. This represents the best case, in which the primary dataset resides entirely on the hosts. At larger cluster scales, peak performance will depend on the system configuration and the percentage of locally resident data, but aggregate performance will continue to scale. This is an area for further exploration by our performance test team.
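The linear scaling in the table above is easy to sanity-check: dividing each row by its node count should yield roughly constant per-node figures. A minimal sketch (values taken directly from the results table):

```python
# MLPerf Storage 3D U-Net results from the table above
nodes = [1, 3, 5]
gpus = [28, 84, 140]               # simulated H100 GPUs supported
throughput = [85.6, 253.1, 420.8]  # total GB/s

# Per-node values: near-constant numbers indicate linear scaling
gpus_per_node = [g / n for g, n in zip(gpus, nodes)]
tput_per_node = [round(t / n, 1) for t, n in zip(throughput, nodes)]

print(gpus_per_node)  # [28.0, 28.0, 28.0] -- GPU support scales exactly linearly
print(tput_per_node)  # [85.6, 84.4, 84.2] -- per-node throughput stays nearly flat
```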

Mean GPU utilization indicates the percentage of time the GPUs are being kept busy vs. waiting. To ‘pass’ the MLPerf Storage benchmark, all GPUs must be kept at 90% or higher utilization. Higher is better, since the goal is to minimize GPU idle time. 

Coefficient of variation (CV) is a measure of the difference in the results between multiple runs of the same test. The MLPerf Storage benchmark requires that each test be run multiple times, and that the results fall within a small range. This ensures that results are truly reproducible. The very low CV shown by the Hammerspace results indicates that system performance was very stable and predictable. 
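For readers who want to reproduce this statistic, the coefficient of variation is simply the standard deviation of the repeated-run results divided by their mean. A minimal sketch — the sample values below are illustrative, not actual benchmark data:

```python
from statistics import mean, pstdev

def coefficient_of_variation(runs):
    """CV = population standard deviation / mean, expressed as a percentage."""
    return 100 * pstdev(runs) / mean(runs)

# Illustrative throughput samples (GB/s) from repeated runs -- not real benchmark data
runs = [420.5, 421.0, 420.3, 421.2, 420.9]
print(f"CV = {coefficient_of_variation(runs):.2f}%")  # a small CV means stable, repeatable results
```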

Competitive Comparison – Simplicity and Efficiency Are Key

To ensure meaningful and fair comparisons, the following discussion includes only vendors who performed the 3D U-Net H100 test using on-premises shared-file configurations. This graph shows the best result submitted by each vendor in terms of the number of GPUs supported:

As you can see, Hammerspace Tier 0 delivered an excellent result, besting most of the household names on this test. But there is another way to look at this data that’s incredibly revealing and relevant – through the lens of efficiency. 

Datacenters everywhere are short on power, cooling, and often rack space. AI, with its power-hungry GPU servers, has magnified the problem. Every Watt dedicated to storage infrastructure is one that’s not available for GPUs. In short, efficiency matters. 

Actual power dissipation information is not available for the MLPerf Storage submissions, but we can use rack U as a proxy, assuming the more rack U a solution requires, the more power it will use. 

When you look at the number of GPUs supported per additional rack U of storage infrastructure, Hammerspace Tier 0 stands head and shoulders above the rest, with a result 3.7x that of the next most efficient system. 

In a real-world situation, GPU servers (represented here by benchmark clients) run AI workloads. “Additional rack U of storage infrastructure” refers to the additional space taken by the storage solution, over and above the compute servers/benchmark clients. 

Because Tier 0 aggregates local NVMe storage across the GPU servers in a cluster, the only additional hardware needed for our benchmark run was a single 1U metadata server, known in Hammerspace as an Anvil. In production installations it’s typical to run two Anvils for high availability, but even then Hammerspace would be 85% more efficient than the next best entry. 
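The efficiency arithmetic above can be reconstructed from the figures in this post. Note that the next-best vendor's efficiency is back-derived from the stated 3.7x ratio, not a published number:

```python
hs_gpus = 140             # GPUs supported in the 5-node Tier 0 run
hs_eff = hs_gpus / 1      # 140 GPUs per additional rack U (a single 1U Anvil)
next_best = hs_eff / 3.7  # ~37.8 GPUs/U, back-derived from the 3.7x claim

# With two Anvils for high availability, efficiency halves...
hs_eff_ha = hs_gpus / 2   # 70 GPUs per rack U
# ...but remains ~85% better than the next best entry, consistent with the text
print(round(hs_eff_ha / next_best - 1, 2))  # -> 0.85
```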

Looking at max GB/s bandwidth reveals a similar story: Hammerspace Tier 0 is 3.7x as efficient as the next nearest entry. 

Benchmark Configuration

Here’s a diagram of the test configuration:

Clients run the benchmark code. With Tier 0, they also house the NVMe drives – 10 ScaleFlux CSD5000 drives per client, in this case. The Anvil is responsible for metadata operations and cluster coordination tasks – no data flows through it. Clients mount the shared file system via parallel NFS (pNFS) v4.2, accessing the storage directly after receiving a layout from the Anvil. 

The benchmark configuration is a bit artificial in its limited scope. Typically, Tier 0 is just one of many tiers of shared, persistent storage in a more comprehensive Hammerspace infrastructure that may include network-attached Tier 1 NVMe, object storage, and more across multiple sites and clouds. 

Why Tier 0 Matters for Enterprise AI

As enterprises contemplate AI initiatives, initial costs loom large. Computing and storage resources must be acquired, and large amounts of data from across the organization must be identified, cleaned, and organized. Anything that can make it simpler to get started is valuable. That’s why, for MLPerf v2.0, Hammerspace focused on our Tier 0 implementation.

Hammerspace Tier 0 activates the NVMe storage already present across a cluster of GPU servers, bringing it into a shared, global namespace. Data placement and protection are automated using Hammerspace’s extensive data orchestration capabilities. Tier 0 even works in the cloud when it makes more sense to rent vs. buy. 

For the critical initial phases of data wrangling, Hammerspace’s assimilation capability eliminates the need to copy huge amounts of data into a net new repository before refining it. Assimilation brings existing NAS volumes into Hammerspace by scanning their metadata. The data itself stays in place. Once the relevant data is identified and prepared, it can be dynamically orchestrated onto high-performance storage like Tier 0 for processing, with results ultimately archived to a lower-cost tier.

Benefits of Tier 0 for Enterprise AI

The benefits of Hammerspace Tier 0 for Enterprise AI include:

Simplicity:

  • Get started with the storage and network infrastructure that’s already in place
  • No agent software to install
  • No special networking, just Ethernet

Performance:

  • Tier 0 storage is up to 10x faster than networked storage
  • Tier 0 increases performance both on premises and in the cloud
  • Increased GPU utilization, faster checkpoints, reduced inferencing times

Efficiency:

  • Less external shared storage needed
  • Less power, rack space, and networking vs. external shared storage
  • Faster time to value – activate Tier 0 in hours, not days or weeks

Conclusion
Hammerspace is proud of our involvement in MLCommons and the MLPerf Storage benchmark program, and we’re proud of our results. But we’re not standing still. We’ve already made additional improvements that deliver even better results – but that’s a topic for a future blog. Until then, you can learn more about Tier 0 and Hammerspace at Hammerspace.com.

Blog post announcement by:
Dan Duperron

Senior Technical Marketing Engineer

Dan Duperron is a Senior Technical Marketing Engineer at Hammerspace. After wasting his electrical engineering degree working in corporate IT, he fell down the data storage rabbit hole and has never been happier. He particularly enjoys getting other people excited about new and clever storage technology.

Hammerspace Enters Korean Market, Accelerates APAC Growth Strategy

Posted in Commentary with tags on July 8, 2025 by itnerd

Hammerspace today announced its official launch in Korea, marking another strategic milestone in the company’s rapid global growth.

Hammerspace has been accelerating its presence across the Asia-Pacific (APAC) region, following successful launches in Japan and China earlier this year. The expansion into Korea builds upon a highly successful year for the company, which recorded a 32% year-over-year increase in customer adoption and a tenfold increase in revenue across multiple regions in 2024.

The Korean launch is a key component of Hammerspace’s APAC growth strategy, as local enterprises seek data platforms which provide both performance and advanced data orchestration technologies to power artificial intelligence (AI), high-performance computing (HPC), research, and other GPU-intensive workloads.

Hammerspace plans to deliver its award-winning Data Platform to Korean organizations looking to use and manage data more efficiently across hybrid cloud environments. The platform enables seamless, instant access to data regardless of location, storage system, or vendor by creating a unified global data environment.

Leading the Transformation of Enterprise Data Infrastructure

Hammerspace is gaining attention for its unique data-in-place architecture, which logically integrates data across diverse, previously siloed enterprise environments using metadata without requiring manual data movement. This allows enterprises to gain full visibility and immediate access to their data, no matter where it is stored.

The Hammerspace platform combines parallel NFS (pNFS) technology with Tier-0 storage layer optimization to deliver exceptional performance. These capabilities ensure low-latency, high-speed access for data-intensive applications—including AI training, HPC workloads, and media rendering—while dramatically reducing total cost of ownership (TCO) by utilizing existing storage and network infrastructure.
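Because pNFS support lives in the mainline Linux kernel's NFS client, application servers can typically reach such a share with an ordinary NFSv4.2 mount rather than a vendor agent. A minimal sketch, assuming hypothetical server and path names:

```shell
# Mount a share over NFSv4.2, which carries the pNFS layout support
# built into the standard Linux kernel NFS client (no proprietary agent).
# "anvil.example.com:/data" and "/mnt/data" are illustrative placeholders.
sudo mkdir -p /mnt/data
sudo mount -t nfs -o vers=4.2,proto=tcp,nconnect=16 anvil.example.com:/data /mnt/data

# Or persist the mount in /etc/fstab:
# anvil.example.com:/data  /mnt/data  nfs  vers=4.2,proto=tcp,nconnect=16  0  0
```

The `nconnect` option (kernel 5.3+) opens multiple TCP connections per mount, which is often useful for the data-intensive workloads described above; the exact options appropriate for a given deployment will vary.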

Hammerspace’s platform also features a policy-driven orchestration engine that automatically relocates data to optimal locations based on workload priority, resource proximity, and usage patterns. This automation supports scalability and agility across single-site, multi-site, and hybrid cloud environments.
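To make the idea concrete, a policy engine of this kind can be thought of as scoring candidate placement targets against workload priority, proximity, and capacity. The following is a minimal illustrative sketch only; the class names, weights, and scoring formula are hypothetical and do not represent Hammerspace's actual engine:

```python
# Hypothetical sketch of policy-driven data placement scoring.
# All names and weights are illustrative, not a real product API.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    latency_ms: float        # proximity to the compute consuming the data
    free_capacity_pct: float # remaining headroom on the target
    tier: int                # 0 = fastest (e.g. local NVMe)

def placement_score(target: Target, workload_priority: int) -> float:
    """Higher score = better placement for this workload.

    High-priority workloads weight latency (proximity) more heavily;
    low-priority workloads lean toward capacity headroom and cheaper tiers.
    """
    latency_score = 1.0 / (1.0 + target.latency_ms)
    capacity_score = target.free_capacity_pct / 100.0
    tier_score = 1.0 / (1.0 + target.tier)
    w_latency = 0.6 if workload_priority >= 5 else 0.2
    return (w_latency * latency_score
            + 0.3 * capacity_score
            + (1.0 - w_latency - 0.3) * tier_score)

targets = [
    Target("gpu-local-nvme", latency_ms=0.1, free_capacity_pct=20, tier=0),
    Target("cloud-object", latency_ms=40.0, free_capacity_pct=90, tier=2),
]
best = max(targets, key=lambda t: placement_score(t, workload_priority=8))
print(best.name)  # → gpu-local-nvme
```

In this toy model, a high-priority AI training job lands on the low-latency local tier despite its limited free capacity, while a lower priority would shift the balance toward the roomier, cheaper target.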

In terms of security and compliance, the platform integrates advanced encryption, access control, audit logging, and governance policies, helping enterprises to strengthen data protection. Real-time data access and enterprise-wide integration further accelerate AI training and inference, empowering data-driven business decisions.

A Strategic Market for AI and HPC Growth

According to IDC’s Semiannual Artificial Intelligence Tracker (2023), Korea’s AI market is expected to grow 12.1% year-over-year in 2025, reaching KRW 3.44 trillion. By 2027, the market is projected to reach KRW 4.46 trillion, with a CAGR of 14.3%. Major sectors such as telecommunications, manufacturing, healthcare, public sector/defense, finance, and education are expected to drive AI adoption, supported by continued investment in digital transformation. 

Global Data Platform Capabilities for Korean Enterprises

Hammerspace’s Global Data Platform provides the following key benefits:

  • Unified global data access across any storage or cloud environment
  • Automated data orchestration and workflow optimization
  • Enhanced data security and compliance capabilities
  • Seamless integration with existing infrastructure
  • Local support through strategic partnerships


The platform consolidates all enterprise data into a single parallel file system, automatically placing data near compute resources. It enables high-performance access to unstructured data, removes silos across sites and cloud environments, and maximizes operational efficiency through continuous data orchestration—ideal for AI and HPC workloads.

For more information on Hammerspace’s solutions in Korea, please visit www.hammerspace.com.

Jeff Lebold Joins Hammerspace as Vice President of APAC Region

Posted in Commentary with tags on July 3, 2025 by itnerd

Hammerspace has announced the appointment of Jeff Lebold as Vice President of the Asia Pacific (APAC) region. A veteran technology leader with nearly three decades of experience, Lebold will spearhead Hammerspace’s aggressive growth and customer momentum throughout one of the world’s fastest-growing markets for AI and data infrastructure.

Lebold brings deep expertise in sales, market development, systems engineering and marketing. He most recently served as Vice President of Sales for Asia-Pacific Enterprise Customers at Impinj, where he consistently delivered strong revenue growth. Previously, during his 22-year tenure at Quantum Corporation, he drove strategic expansion across a complex seven-country APAC territory, building high-performing cross-cultural teams and delivering transformative market success. Fluent in Mandarin, Lebold is known for forging strong partnerships and scaling global operations.

Today’s enterprises face the challenge of optimizing high-performance data access for AI workloads, scaling their infrastructure efficiently, and managing complex, distributed data environments. Hammerspace’s award-winning Data Platform delivers a competitive edge across every dimension of unstructured data: storage, access, movement and deployment. Whether training models on thousands of GPUs on-premises or in the cloud, deploying large-scale inference or maximizing NVMe performance in local GPU servers, Hammerspace is purpose-built to unleash data performance at scale.

Current open positions at Hammerspace are available on its Careers page.