HPE Cray ClusterStor E1000 Storage Systems
Leverage a new parallel HPC storage system that was purpose-engineered for the era of converged simulation and AI workloads
Overview:
The Cray ClusterStor E1000 storage solution is purpose-engineered to meet the demanding input/output requirements of supercomputers and HPC clusters in a very efficient way. The E1000 parallel storage solution typically achieves a given HPC storage performance requirement with significantly fewer storage drives than alternative storage offerings. That means HPC users with a fixed budget can spend more of it on CPU/GPU compute nodes, accelerating time-to-insight. The Cray ClusterStor E1000 storage solution embeds the open source parallel file system Lustre® to deliver this efficient performance. It scales out (nearly) linearly, without software licensing for the file system per terabyte of capacity or per storage drive, and Hewlett Packard Enterprise provides enterprise-grade customer support in-house. This allows customers to reap the benefits of the open source community while getting enterprise-grade support from Hewlett Packard Enterprise.
The next-generation ClusterStor storage system, ClusterStor E1000, is designed on NVMe Gen 4 hardware building blocks based on the latest storage technologies to provide enhanced flexibility and resiliency. It is an I/O and storage subsystem consisting of a global POSIX single-namespace file system that provides access between compute clients and ClusterStor E1000 storage nodes using the Lustre parallel file system. ClusterStor E1000 offers three basic configurations:
- All Flash
- All HDD
- Tiered with both HDD and flash
The all-flash configuration is suited to workloads with many small random reads and the high sequential performance required by many AI applications, including machine learning and deep learning. The base rack provides up to 1,440 GB/s read and up to 900 GB/s write with Multi-Rail LNet. The expansion rack provides up to 1,600 GB/s read and up to 1,000 GB/s write with Multi-Rail LNet.
The all-HDD configuration is used mainly for large sequential I/O and modeling and simulation. The base rack provides up to 90 GB/s read and write and up to approximately 7.5 PB of usable capacity, depending on the HDD capacity used. The expansion rack provides up to 120 GB/s read and write and up to 10 PB of usable capacity, depending on the HDD capacity used.
Customers can mix and match the disk and flash building blocks in a tiered configuration to tailor to their workloads.
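From the compute side, all of these configurations present the same global Lustre namespace. As a rough illustration of what that looks like on a client, the sketch below mounts the file system and lists its storage targets; the MGS address, file system name, and mount point are hypothetical placeholders, and the node is assumed to have the Lustre client software installed.

    # Illustrative sketch only: mounting and inspecting a Lustre file system
    # such as the one ClusterStor E1000 presents. All values are placeholders.
    import subprocess

    MGS_NID = "10.0.0.1@o2ib"   # hypothetical management server (MGS) NID
    FSNAME  = "cls1"            # hypothetical Lustre file system name
    MOUNTPT = "/mnt/cls1"

    # Mount the global POSIX namespace on the compute client.
    subprocess.run(["mount", "-t", "lustre", f"{MGS_NID}:/{FSNAME}", MOUNTPT], check=True)

    # List per-MDT/OST capacity to confirm the client sees all storage targets.
    subprocess.run(["lfs", "df", "-h", MOUNTPT], check=True)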
- Up to 80/50 GB/s read/write aggregate file system throughput from 24 NVMe Gen 4 SSDs in a 2-rack-unit form factor.
- Up to 30/30 GB/s read/write aggregate file system throughput from 212 7.2K RPM SAS HDDs utilizing ultra-dense enclosures in a 10-rack-unit form factor.
- Up to 40/40 GB/s read/write aggregate file system throughput from 424 7.2K RPM SAS HDDs utilizing ultra-dense enclosures in an 18-rack-unit form factor.
- Up to 24/24 GB/s read/write aggregate file system throughput from 168 7.2K RPM SAS HDDs utilizing high-density enclosures in a 12-rack-unit form factor.
- Data-at-rest encryption options for all configurations.
- More than one terabyte per second of aggregate file system performance in a single rack.
- More than 10 PB of usable storage capacity in a single rack (using 16 TB HDDs).
- Attaches to any supercomputer or HPC cluster that supports Cray Slingshot, HDR/EDR InfiniBand, or 200/100 Gigabit Ethernet.
- In-house, enterprise-grade support for performance-accelerating Lustre features like Data on MDT (DoM), Progressive File Layout (PFL), Multi-Rail LNet, Distributed Namespace (DNE), and many more advanced Lustre features.
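As a hedged sketch of how two of those Lustre features combine in practice, an administrator might apply a PFL layout with a DoM first component to a directory, so small files land on flash-backed MDTs while large files stripe across OSTs. The component sizes and path below are illustrative values, not ClusterStor-tuned defaults, and the lfs utility from the Lustre client tools is assumed.

    # Sketch: a PFL layout whose first component is Data on MDT (DoM).
    # Sizes and path are illustrative, not ClusterStor defaults.
    import subprocess

    layout = [
        "-E", "64K", "-L", "mdt",   # first 64 KiB stored on the MDT (DoM)
        "-E", "1G",  "-c", "1",     # up to 1 GiB striped over a single OST
        "-E", "-1",  "-c", "-1",    # remainder striped across all OSTs
    ]
    subprocess.run(["lfs", "setstripe", *layout, "/mnt/cls1/project"], check=True)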
Features:
Performance Efficiency
The Cray ClusterStor E1000 Storage System uses purpose-engineered NVMe PCIe 4.0 storage controllers that can extract more performance from the deployed storage drives than other storage solutions. That means fewer enclosures, fewer racks, and lower power and floor-space consumption.
With up to 80/50 GB/s read/write aggregate file system throughput from 24 NVMe Gen4 SSDs in a 2-rack-unit form factor, the Scalable Storage Unit SSU-F provides a very efficient way to deliver 80/50 GB/s to your compute nodes from just 24 SSDs.
Up to 30/30 GB/s read/write aggregate file system throughput from 212 7.2K RPM SAS HDDs in a 10-rack-unit form factor with the Scalable Storage Unit SSU-D2, an ultra-dense, cost-effective way to deliver 30 GB/s to your compute nodes from just 212 HDDs.
Up to 24/24 GB/s read/write aggregate file system throughput from 168 7.2K RPM SAS HDDs in a 12-rack-unit form factor with the Scalable Storage Unit SSU-M2, a datacenter-friendly way to deliver 24 GB/s to your compute nodes from just 168 HDDs.
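A quick back-of-the-envelope check of those efficiency claims, using only the figures quoted above, shows the per-drive throughput each building block extracts:

    # Per-drive throughput implied by the aggregate figures quoted above.
    units = {
        # name: (read GB/s, write GB/s, drive count)
        "SSU-F (24 NVMe SSDs)":  (80, 50, 24),
        "SSU-D2 (212 SAS HDDs)": (30, 30, 212),
        "SSU-M2 (168 SAS HDDs)": (24, 24, 168),
    }
    for name, (rd, wr, drives) in units.items():
        print(f"{name}: {rd / drives:.2f} GB/s read, "
              f"{wr / drives:.2f} GB/s write per drive")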
Because the parallel file system is open source, no license charges are incurred for the file system per terabyte of capacity or per storage drive. This removes software audit risk and lets you cope with the constant growth in required capacity without exploding software licensing costs.
Reduce Complexity
The Cray ClusterStor E1000 Storage System ships to the customer as a fully-integrated Lustre storage solution after being soak-tested in the factory, decreasing time to results.
ClusterStor Manager is a single-system-image management application provided with every system at no additional charge, allowing you to monitor and manage the different components of a Lustre storage system from a single pane of glass.
Users of supercomputers and HPC clusters from Hewlett Packard Enterprise can now get enterprise-grade support for their whole mission- or business-critical system (HPC compute and HPC storage) with one number to call, without the typical finger-pointing of different vendors during issue resolution.
Flexibility and Choice
The Cray ClusterStor E1000 Storage System attaches to HPC compute systems from any vendor, as long as the compute system supports HPE Slingshot, HDR/EDR InfiniBand, or 200/100 Gigabit Ethernet, enabling the deployment of one shared storage system for all HPC clusters.
Mixing and matching all-flash and HDD Scalable Storage Units in the same file system allows you to tailor the solution to the workload mix (simulation, artificial intelligence, high-performance data analytics) running on the compute nodes, enabling consolidation of infrastructure silos, as shown in the sketch below.
Choose between the datacenter-friendly SSU-M utilizing 5U84 disk storage enclosures or maximum capacity with the SSU-D utilizing 4U106 ultra-dense disk storage enclosures.
Choose between read-intensive (RI) or mixed-use (MU) SSDs in the E1000's SSU-F and/or MetaData Unit (MDU).
Choose between shipment of the system from the factory in Cray ClusterStor racks or installation in your data center racks, as long as they meet technical specifications.
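One common Lustre mechanism for steering workloads in such a mixed flash/HDD file system is OST pools. The sketch below is illustrative only: the pool names and paths are hypothetical, and the pools would first have to be defined on the management server with lctl (pool_new / pool_add) before directories can reference them.

    # Sketch: directing workloads to flash or HDD tiers via Lustre OST pools.
    # Pool names and paths are hypothetical examples.
    import subprocess

    # Send scratch data for an AI training job to the all-flash OSTs...
    subprocess.run(["lfs", "setstripe", "-p", "flash", "/mnt/cls1/ai_scratch"], check=True)

    # ...and large sequential simulation output to the HDD OSTs.
    subprocess.run(["lfs", "setstripe", "-p", "disk", "/mnt/cls1/sim_results"], check=True)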
Hardware Specs:
Example Cray ClusterStor E1000 - Open (doors removed)
Cray ClusterStor E1000 Building Blocks
ClusterStor E1000 is configured from storage building blocks along with surrounding network, rack and power infrastructure. These building blocks are:
Cray ClusterStor E1000 System Management Unit (SMU)
The ClusterStor E1000 system management unit contains a pair of embedded storage management nodes with SSDs to hold configuration and logging data. The SMU delivers Lustre and ClusterStor system management services. ClusterStor system management is required for managing hardware configuration, software images, boot up of the underlying hardware system (servers, devices, software stack), monitoring system metrics and system health, and reporting on system status. The SMU runs a web and CLI server that allows the administrator to monitor, configure, and administer the file system, the ClusterStor management framework, and ClusterStor hardware.
Cray ClusterStor E1000 MetaData Unit (MDU)
ClusterStor E1000 MDUs store and manage global Lustre metadata with a pair of Lustre metadata servers (MDS) and flash-based metadata targets (MDTs). An MDU is populated with 24 SSDs; two partitions, each with approximately half the capacity, are configured and assigned to the two embedded controllers as MDTs using LDISKFS. MDUs can be configured with multiple MDTs, with each controller configured with one or two high-speed network adapters, depending on the required network resiliency per MDS.
ClusterStor E1000 configures MDUs to support Lustre DoM, which allows small files or the initial portions of files to be stored with their metadata, improving small-file performance and stat performance for small files, and reducing interference between small-file I/O and streaming I/O. The metadata tier stores metadata and optional DoM data in RAID volumes. ClusterStor E1000 can scale to additional MDUs using Lustre Distributed Namespace (DNE) to match specified performance targets.
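To make the DNE scaling concrete, the hedged sketch below shows the standard Lustre client commands for spreading the namespace across multiple MDTs; the MDT index, stripe count, and paths are illustrative, and a file system with at least two MDTs plus the lfs client tools are assumed.

    # Sketch: DNE namespace distribution across MDTs. Values are illustrative.
    import subprocess

    # Place a new directory tree on a specific MDT (DNE remote directory)...
    subprocess.run(["lfs", "mkdir", "-i", "1", "/mnt/cls1/users"], check=True)

    # ...or stripe a metadata-heavy directory across two MDTs
    # (DNE striped directory).
    subprocess.run(["lfs", "mkdir", "-c", "2", "/mnt/cls1/shared"], check=True)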
Cray ClusterStor E1000 Scalable Storage Unit - All Flash Array (SSU-F)
ClusterStor E1000 SSU-F provides flash-based file I/O data services and network request handling for the file system with a pair of Lustre object storage servers (OSS), each configured with one or more Lustre object storage targets (OSTs) to store and retrieve the portions of the file system data committed to it. OSTs are distributed evenly between the two OSSs in each SSU-F so that both OSSs are active concurrently (that is, the OSSs are active-active), each operating on its own exclusive subset of the available OSTs (that is, each OST is active-passive).
A ClusterStor E1000 SSU-F is populated with 24 SSDs. For a throughput-optimized configuration, two OSTs, each with approximately half the capacity, are configured with ClusterStor's GridRAID declustered parity and sparing RAID solution using LDISKFS. For an IOPS-optimized SSU-F configuration, a different RAID scheme is used to improve small random I/O workloads. Each controller can be configured with two or three high-speed network adapters configured with Multi-Rail LNet to exploit maximum throughput performance per SSU-F. ClusterStor E1000 can be scaled to many SSU-Fs and/or combined with SSU-Ds to achieve specified performance requirements. SSU-D features are described below.
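For reference, Multi-Rail LNet is the standard Lustre mechanism for aggregating bandwidth across multiple network ports on one node. The minimal sketch below registers two InfiniBand interfaces on the same LNet network; the interface names and the o2ib network are assumptions, not ClusterStor-specific values, and the lnetctl utility is assumed to be present.

    # Sketch: enabling Multi-Rail LNet on a node with two InfiniBand ports.
    # Interface names and network are assumed values.
    import subprocess

    subprocess.run(["lnetctl", "lnet", "configure"], check=True)

    # Registering two interfaces on the same LNet network enables Multi-Rail,
    # so traffic can be balanced across both ports.
    subprocess.run(["lnetctl", "net", "add", "--net", "o2ib", "--if", "ib0,ib1"],
                   check=True)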
Cray ClusterStor E1000 Scalable Storage Unit - Disk (SSU-D)
ClusterStor E1000 SSU-D provides HDD-based file I/O data services and network request handling for the file system, with OSS and OST features similar to those described above for ClusterStor E1000's SSU-F.
ClusterStor E1000 has multiple SSU-D configurations with either one, two, or four ClusterStor ultra-dense HDD enclosures. Each ultra-dense disk enclosure is configured with 106 SAS HDDs and contains two Lustre OSTs, each configured with ClusterStor's GridRAID declustered parity and sparing RAID solution using LDISKFS. Each SSU-D controller is configured with one high-speed network adapter and redundant SAS adapters to connect to one or more ClusterStor ultra-dense HDD enclosures. ClusterStor E1000 can be scaled to many SSU-Ds and/or combined with SSU-Fs to achieve specified performance requirements.
Cray ClusterStor E1000 Scalable Storage Unit - Disk (SSU-M)
ClusterStor E1000 SSU-M provides HDD-based file I/O data services and network request handling for the file system, with OSS and OST features similar to those described above for ClusterStor E1000's SSU-D.
ClusterStor E1000 has multiple SSU-M configurations with either one or two ClusterStor HDD enclosures. Each high-density disk enclosure is configured with 84 SAS HDDs and contains two Lustre OSTs, each configured with ClusterStor's GridRAID declustered parity and sparing RAID solution using LDISKFS. Each SSU-M controller is configured with one high-speed network adapter and redundant SAS adapters to connect to one or more ClusterStor HDD enclosures. ClusterStor E1000 can be scaled to many SSU-Ms and/or combined with SSU-Fs to achieve specified performance requirements.
Management Switches
ClusterStor E1000 utilizes Aruba switches for a private management network between all building blocks in the file system. The private management network is deployed in a highly available configuration. Depending on the size of the file system, either Aruba CX 6300M switches or a combination of Aruba CX 8325 and CX 6300M switches will be factory integrated. Only back-to-front (also called power-to-port or reversed-airflow) switches are available with ClusterStor E1000.
Warranty
1-0-0 warranty
This product is covered by a global limited warranty and supported by Hewlett Packard Enterprise Services. Hardware diagnostic support and repair is available for one year from date of purchase. Support for software is available for 90 days from date of purchase. Enhancements to warranty services are available through HPE or customized service agreements.
Solid State Drives are subject to maximum usage and/or maximum supported lifetime limitations, whichever occurs first. Maximum supported lifetime is the period in years, set equal to the warranty for the device. The maximum usage limit is the maximum amount of data that can be written to the device before reaching the device's write endurance limit.
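As a worked illustration of that usage limit: the capacity, drive-writes-per-day (DWPD) rating, and five-year window below are assumed example values, not HPE-published limits for any specific drive.

    # Worked example of an SSD write-endurance (maximum usage) limit.
    # All ratings below are assumed for illustration, not HPE specifications.
    capacity_tb = 3.84   # hypothetical SSD capacity in TB
    dwpd        = 1.0    # hypothetical drive-writes-per-day rating
    years       = 5      # assumed maximum supported lifetime

    max_usage_tb = capacity_tb * dwpd * 365 * years
    print(f"Maximum usage limit: about {max_usage_tb:,.0f} TB written "
          f"({max_usage_tb / 1000:.1f} PB) over {years} years")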
Services:
HPE Pointnext
HPE Pointnext Services leverages our strength in infrastructure, partner ecosystems, and the end-to-end lifecycle experience to accelerate powerful, scalable IT solutions and shorten your time to value. HPE Pointnext Services provides a comprehensive portfolio including Advisory and Transformational, Professional, and Operational Services to help accelerate your digital transformation.
Operational Services
- HPE Datacenter Care: HPE’s most comprehensive support solution tailored to meet your specific data center support requirements. It offers a wide choice of proactive and reactive service levels to cover requirements ranging from the most basic to the most business-critical environments. HPE Datacenter Care Service is designed to scale to any size and type of data center environment while providing a single point of contact for all your support needs for HPE as well as selected multivendor products.
- HPE Critical Service: High-performance reactive and proactive support designed to minimize downtime. It offers an assigned support team, which includes an account support manager (ASM). This service offers access to the HPE Global NonStop Solution Center, 24x7 hardware and software support, six-hour call-to-repair commitment, enhanced parts inventory, and accelerated escalation management.
- HPE Proactive Care: Provides proactive and reactive support delivered under the direction of an ASM. It offers 24x7 hardware support with four-hour on-site response, 24x7 software support with a two-hour response, and flexible call submittal.
- HPE Foundation Care: Support for HPE servers, storage, networking hardware, and software to meet your availability requirements with a variety of coverage levels and response times.
Advisory and Transformation Services
HPE Pointnext Services designs the transformation and builds a road map tuned to your unique challenges, including hybrid cloud, workload and application migration, big data, and the edge. Hewlett Packard Enterprise leverages proven architectures and blueprints and integrates with partner products and solutions. We also engage the Professional and Operational Services teams as needed.
Professional Services
HPE Pointnext Services creates and integrates configurations that get the most out of software and hardware and works with your preferred technologies to deliver the optimal solution. Services provided by the HPE Pointnext Services team, certified channel partners, or specialist delivery partners include installation and deployment services, mission-critical and technical services, and education services.
Pricing Notes:
- All Prices are Inclusive of GST
- Pricing and product availability subject to change without notice.
Our Price: Request a Quote