NetApp AI Storage Solutions

Powered by NetApp ONTAP software, designed to manage and protect your data

 

AI Storage by NetApp

ONTAP AI is a reference architecture design from NetApp that combines AFF A-series storage arrays and NVIDIA DGX AI appliances, linked by NVIDIA Networking switches delivering connectivity with the highest performance, lowest latency and fastest throughput. These elements have been tested and optimised together, and certified by NVIDIA, to ensure a seamless AI training experience, offering a host of advantages.

Key Features

Thoroughly Tested

Because the ONTAP AI architecture is thoroughly tested, deploying these solutions reduces deployment challenges and eliminates many unknowns.

Accelerate Results

NetApp AFF A-series systems keep data flowing to GPUs with fast and flexible all-flash storage, featuring end-to-end NVMe technologies.

Scale Independently

The flexibility of the solution allows DGX compute and AFF storage to scale independently of each other as required, while maintaining a consistent architecture.

Cost Effective

The breadth of the AFF A-series storage portfolio provides a range of capacity and cost entry points when designing an ONTAP AI reference architecture.

ONTAP AI Cluster Configurations

At the heart of the NetApp ONTAP AI architecture are NVIDIA DGX H100 systems, supporting the highest-performing training and inferencing on a single platform. Each DGX H100 system is powered by eight NVIDIA H100 Tensor Core GPUs and features high-speed NVIDIA ConnectX-7 interconnects. These link to NetApp AFF A-series storage arrays, capable of feeding data to the DGX systems up to 4 times faster than competing solutions. The solution is completed with NVIDIA Spectrum Ethernet switches, which provide the low latency, high density, high performance and power efficiency demanded by AI environments.
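
In practice, the storage's job in this design is to keep the DGX GPUs continuously supplied with training data. As a rough illustration only, the short PyTorch sketch below shows one way a training job on a DGX node might stream a dataset from the shared AFF storage; it assumes the ONTAP volume is already mounted on the host (for example over NFS at the hypothetical path /mnt/ontap_ai) and that samples are stored as pre-processed tensor files, so the paths, file layout and loader settings are illustrative rather than part of the reference architecture.

```python
# Minimal sketch: feeding GPUs from a volume mounted off the AFF array.
# Assumptions (not taken from the reference architecture): the volume is
# mounted at /mnt/ontap_ai, and each *.pt file holds a (sample, label)
# tuple of equally shaped tensors.
from pathlib import Path

import torch
from torch.utils.data import Dataset, DataLoader


class MountedTensorDataset(Dataset):
    """Reads pre-processed samples saved as .pt files on the mounted volume."""

    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        return torch.load(self.files[idx])  # -> (sample, label)


if __name__ == "__main__":
    dataset = MountedTensorDataset("/mnt/ontap_ai/train")  # hypothetical mount point
    loader = DataLoader(
        dataset,
        batch_size=256,
        shuffle=True,
        num_workers=16,   # parallel readers help keep the GPUs busy
        pin_memory=True,  # faster host-to-GPU copies
    )
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for samples, labels in loader:
        samples = samples.to(device, non_blocking=True)
        labels = labels.to(device, non_blocking=True)
        # ... forward/backward pass would go here ...
        break
```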

The NetApp AFF Storage Array Range

ONTAP AI is a reference architecture design from NetApp combining AFF A-series storage arrays, NVIDIA DGX H100 AI appliances and NVIDIA Mellanox Ethernet switches for fast connectivity. These elements have been tested and optimised together, ensuring a seamless AI training experience, and offer a host of advantages, including the following:

  •   A massive throughput of up to 300GB/s and 11.4 million IOPS in a 24-node cluster
  •   30TB solid-state drives (SSDs) with multi-stream write (MSW)
  •   100Gb Ethernet together with 32Gb FC connectivity
  •   High density with 2PB in a 2U drive shelf
  •   Scaling from 364TB (2 nodes) to 74PB (24 nodes)
  •   ONTAP 9.8, with a complete suite of data protection and replication features (see the sketch after this list)
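
Those data protection and replication features can be driven through ONTAP's management interfaces as well as its GUI. As a rough, read-only illustration, the Python sketch below queries the ONTAP REST API (available since ONTAP 9.6) for volumes and SnapMirror replication relationships; the cluster address and credentials are placeholders, and the exact fields returned vary by ONTAP release, so treat it as an indicative sketch rather than a recipe.

```python
# Read-only sketch against the ONTAP REST API. The host name and credentials
# are placeholders; the /api/storage/volumes and /api/snapmirror/relationships
# endpoints exist in current ONTAP releases, but field names may vary slightly.
import requests

ONTAP_HOST = "https://cluster-mgmt.example.com"  # hypothetical cluster management LIF
AUTH = ("admin", "password")                     # placeholder credentials


def get(path, params=None):
    """Issue a GET against the ONTAP REST API and return the parsed JSON body."""
    resp = requests.get(f"{ONTAP_HOST}/api{path}", auth=AUTH, params=params,
                        verify=False, timeout=30)  # verify=False only for lab use
    resp.raise_for_status()
    return resp.json()


# List volumes (for example, the volumes holding AI training datasets).
for vol in get("/storage/volumes", {"fields": "name,size,svm.name"}).get("records", []):
    print(vol.get("name"), vol.get("size"), vol.get("svm", {}).get("name"))

# List the SnapMirror relationships replicating those volumes.
for rel in get("/snapmirror/relationships").get("records", []):
    print(rel.get("source", {}).get("path"), "->", rel.get("destination", {}).get("path"))
```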

AFF A-Series systems, from entry-level to high-end, support end-to-end NVMe technologies, from NVMe-attached SSDs to front-end NVMe over Fibre Channel (NVMe/FC) host connectivity. These systems deliver the industry’s lowest latency for an enterprise all-flash array, making them a superior choice for driving the most demanding workloads and applications. With a simple software upgrade to the modern NVMe/FC SAN infrastructure, you can drive more workloads with faster response times, without disruption or data migration.

AFF A900
  •   Best For: The most demanding workloads requiring ultra-low latency
  •   Maximum Scale Out: 24 nodes
  •   Maximum SSD: 5760
  •   Maximum Effective Capacity: 702.7PB
  •   Software: ONTAP Enterprise
  •   Controller Chassis Form Factor: 8U

AFF A800
  •   Best For: Performance-driven workloads and mission-critical applications requiring high resiliency
  •   Maximum Scale Out: 24 nodes
  •   Maximum SSD: 2880
  •   Maximum Effective Capacity: 316.3PB
  •   Software: ONTAP
  •   Controller Chassis Form Factor: 4U

AFF A400
  •   Best For: Most enterprise applications that require best balance of performance and cost
  •   Maximum Scale Out: 24 nodes
  •   Maximum SSD: 5760
  •   Maximum Effective Capacity: 702.7PB
  •   Software: ONTAP
  •   Controller Chassis Form Factor: 4U

AFF A250
  •   Best For: Mid-size business and small enterprises that require simplicity and best value
  •   Maximum Scale Out: 24 nodes
  •   Maximum SSD: 576
  •   Maximum Effective Capacity: 35PB
  •   Software: ONTAP
  •   Controller Chassis Form Factor: 2U