3XS HGX Custom AI Servers
Bespoke designs for the most demanding AI workloads
AI Training & Inferencing Servers by 3XS Systems
3XS custom AI servers are based on NVIDIA-certified HGX designs. Powered by eight NVIDIA SXM GPUs, they give data scientists and researchers a powerful platform for scaling out AI models.
Designed for datacentre environments, 3XS AI servers are fine-tuned by our hardware engineers and data scientists for maximum performance and reliability. We provide a custom software stack so you spend more time training your models and less time configuring drivers, libraries and APIs.
CONTACT US TO DISCUSS YOUR REQUIREMENTS >
Accelerated AI with NVIDIA GPUs
The NVIDIA datacentre family of SXM GPU accelerators represents the cutting edge in performance for all AI workloads, offering unprecedented compute density, performance and flexibility. Based on the Hopper architecture, they feature 4th gen NVLink and 2nd gen Multi-Instance GPU (MIG) technology to deliver powerful yet flexible GPU acceleration. Explore the range of NVIDIA SXM GPUs currently available in 3XS HGX AI servers in more detail below.
| | H200 | H100 |
|---|---|---|
| ARCHITECTURE | Hopper | Hopper |
| FORM FACTOR | SXM5 | SXM5 |
| GPU | H200 | H100 |
| CUDA CORES | 16,896 | 16,896 |
| TENSOR CORES | 528 4th gen | 528 4th gen |
| MEMORY | 141GB HBM3e | 80GB HBM3 |
| ECC MEMORY | ✔ | ✔ |
| MEMORY CONTROLLER | 5,120-bit | 5,120-bit |
| NVLINK SPEED | 900GB/sec | 900GB/sec |
| TDP | 700W | 700W |
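As a quick sanity check after deployment, the GPUs listed above can be enumerated from Python. The sketch below assumes the nvidia-ml-py (pynvml) bindings are installed on the node; it simply reports the device count, memory and power limit for whatever GPUs are present.

```python
# Enumerate the installed GPUs and report the figures quoted in the table above.
# Requires the nvidia-ml-py bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"Detected {count} GPU(s)")  # expect 8 on a fully populated HGX baseboard
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes
        power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
        print(f"GPU {i}: {name}, {mem.total / 1e9:.0f} GB, power limit {power / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```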
AI-Ready Software Stack
3XS AI servers are provided with a custom software stack to speed up your AI training. This includes the latest Ubuntu operating system; Docker, for creating, sharing and running containers on a single node; and TensorFlow, a popular framework for building and deploying AI applications.
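To illustrate the stack in use, the sketch below assumes TensorFlow 2.x with GPU support, as shipped in the supplied software stack. It checks that the GPUs are visible and spreads a toy Keras model across them with MirroredStrategy, which replicates the model on each local GPU and synchronises gradients with NCCL over NVLink.

```python
# Minimal multi-GPU training sketch for an HGX node running TensorFlow 2.x.
import numpy as np
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow sees {len(gpus)} GPU(s)")  # expect 8 on an HGX node

# MirroredStrategy replicates the model on every local GPU and
# aggregates gradients across them during training.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Dummy data, just to demonstrate a distributed training step.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, batch_size=256, epochs=1)
```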
Powered by NVIDIA GPUs, 3XS AI servers are the perfect platform for the comprehensive catalogue of optimised AI and HPC software provided in the NVIDIA GPU Cloud (NGC). This is further enhanced by NVIDIA AI Enterprise.
NVIDIA AI Enterprise
NVIDIA AI Enterprise unlocks access to a wide range of frameworks that accelerate the development and deployment of AI projects. Leveraging pre-configured frameworks removes many of the manual tasks and complexity associated with software development, enabling you to deploy your AI models faster as each framework is tried, tested and optimised for NVIDIA GPUs. The less time spent developing, the greater the ROI on your AI hardware and data science investments.
Rather than assembling thousands of co-dependent libraries and APIs from different authors when building your own AI applications, NVIDIA AI Enterprise removes this pain point by providing the full AI software stack, covering use cases such as healthcare, computer vision, speech and generative AI.
Enterprise-grade support is provided 9x5 with a 4-hour SLA and direct access to NVIDIA’s AI experts, minimising risk and downtime while maximising system efficiency and productivity.
FIND OUT MORE >
Workload Management
Run:ai software allows intelligent resource management and consumption so that users can easily access GPU fractions, multiple GPUs or clusters of servers for workloads of every size and stage of the AI lifecycle. This ensures that all available compute can be utilised and GPUs never have to sit idle. Run:ai’s scheduler is a simple plug-in for Kubernetes clusters and adds high-performance orchestration to your containerised AI workloads.
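For illustration only, here is one way a containerised job might be submitted to a Run:ai-scheduled Kubernetes cluster using the official Kubernetes Python client. The scheduler name, the gpu-fraction annotation key and the container image tag are assumptions drawn from Run:ai's and NGC's public documentation rather than from this page, so verify them against your cluster's Run:ai version.

```python
# Sketch: submit a pod that asks an assumed Run:ai scheduler for half a GPU.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="fractional-gpu-job",
        annotations={"gpu-fraction": "0.5"},  # assumed Run:ai fractional-GPU annotation
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed Run:ai scheduler name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/tensorflow:24.01-tf2-py3",  # illustrative NGC tag
                command=["python", "train.py"],  # hypothetical training entrypoint
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```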
FIND OUT MORE >
AI Optimised Storage
AI-optimised storage appliances ensure that your AI servers are kept fully utilised and always working at maximum efficiency. Scan AI offers software-defined storage appliances powered by PEAK:AIO, plus further options from leading brands such as Dell EMC, NetApp and DDN, to ensure we have an AI-optimised storage solution that is right for you.
FIND OUT MORE >
Managed Hosting Solutions
AI projects scale rapidly and can consume huge amounts of GPU-accelerated resource alongside significant storage and networking overheads. To address these challenges, the Scan AI Ecosystem includes managed hosting options. We’ve partnered with a number of secure datacentre providers to deliver tailor-made hardware hosting environments that combine high performance and scalability with security and peace of mind. Organisations maintain control over their own systems, but without the day-to-day admin or the racking, power and cooling concerns associated with on-premises infrastructure.
FIND OUT MORE >
Select your 3XS HGX Custom AI Server
3XS AI servers are available in a range of base configurations that can be customised to your requirements.
| Component | Specification |
|---|---|
| GPUs | Up to 8x NVIDIA SXM H100 or H200 |
| CPUs | 2x Intel Xeon, up to 64 cores |
| Cooling | Air / Water |
| RAM | Up to 8TB |
| Storage | 8x NVMe SSDs, 2x SATA |
| Networking | NVIDIA ConnectX NICs or BlueField DPUs up to 800Gb/s |
| Form Factor | 7U |
If you’re still unsure of the best solution for your organisation, contact our team to discuss your projects or requirements and to arrange a free GUIDED PROOF OF CONCEPT TRIAL. Alternatively, you can train AI models on any device via virtualised NVIDIA GPUs hosted on our SCAN CLOUD PLATFORM.