Get in touch with our AI team.
3XS MGX Custom AI Servers
Bespoke modular 1U and 2U hyper-dense designs for AI
AI Training & Inferencing Servers by 3XS Systems
3XS custom AI servers for training and inferencing are based on NVIDIA-certified MGX designs. Powered by NVIDIA GH200 Grace Hopper Superchips, or by NVIDIA Grace or Intel Xeon CPUs combined with NVIDIA PCIe GPUs, these hyper-dense servers provide the ultimate in flexibility and expandability for present and future GPUs, DPUs and CPUs.
We provide a custom software stack that means you spend more time on your model training and less time configuring drivers, libraries and APIs.
CONTACT US TO DISCUSS YOUR REQUIREMENTS >
Accelerated AI with NVIDIA
Hyper-dense MGX systems feature the powerful NVIDIA GH200 Grace Hopper Superchip, which combines GPU and CPU functionality in one module. GH200 pairs a 72-core Arm-based NVIDIA Grace CPU with an H100 Tensor Core GPU in a single module, connected by NVIDIA NVLink-C2C with coherent CPU and GPU memory. At 900GB/s, NVLink-C2C is 7x faster than PCIe Gen 5, and with HBM3e GPU memory the superchip supercharges accelerated computing and generative AI. GH200 runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, the HPC SDK, and Omniverse.
Alternatively, you can choose MGX systems with either NVIDIA Grace or Intel Xeon CPUs, both with NVIDIA PCIe GPUs. Explore the range of configurations currently available in 3XS MGX AI servers in more detail below.
SPECIFICATION | GH200 Superchip | H100 NVL | L40S |
---|---|---|---|
ARCHITECTURE | Grace (CPU), Hopper (GPU) | Hopper | Ada Lovelace |
FORM FACTOR | Superchip Module | PCIe 5 | PCIe 4 |
CPU | ARM Neoverse V2 | NVIDIA Grace or Intel Xeon | NVIDIA Grace or Intel Xeon |
SYSTEM RAM | 480GB | Configurable | Configurable |
GPU | H100 | H100 | AD102 |
CUDA CORES | 16,896 | 16,896 | 18,176 |
TENSOR CORES | 528 4th gen | 528 4th gen | 568 4th gen |
MEMORY | 144GB HBM3e | 94GB HBM3 | 48GB GDDR6 |
ECC MEMORY | ✔ | ✔ | ✔ |
MEMORY CONTROLLER | 5,120-bit | 5,120-bit | 384-bit |
NVLINK SPEED | 900GB/sec NVLink-C2C | 900GB/sec | × |
TDP | 450W-1000W | 300W | 350W |
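As a rough illustration of how these specifications can be verified on a delivered system, the Python sketch below uses the NVIDIA Management Library bindings (the nvidia-ml-py / pynvml package, assumed here as the tooling of choice) to enumerate the installed GPUs and report their name, total memory and power limit for comparison against the table above.

```python
# Minimal sketch: enumerate NVIDIA GPUs and report the headline figures
# from the table above (name, total memory, power limit).
# Assumes the nvidia-ml-py bindings ("pip install nvidia-ml-py") and the
# NVIDIA driver are installed; all calls are standard NVML API functions.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        power_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000
        print(f"GPU {i}: {name}, "
              f"{mem.total / 1e9:.0f} GB memory, "
              f"{power_w:.0f} W power limit")
finally:
    pynvml.nvmlShutdown()
```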
AI-Ready Software Stack
3XS AI servers are provided with a custom software stack to speed up your AI training. This includes the latest Ubuntu operating system, Docker for creating, sharing and running individual containers on a single node, and TensorFlow, a popular framework for building and deploying AI applications.
Powered by NVIDIA GPUs, 3XS AI servers are the perfect platform for unlocking the comprehensive catalogue of optimised AI and HPC software provided in the NVIDIA GPU Cloud (NGC). This is further enhanced by NVIDIA AI Enterprise.
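As a minimal sketch of the kind of sanity check you might run once this stack is in place (assuming the TensorFlow build on the system is GPU-enabled), the snippet below confirms that TensorFlow can see the NVIDIA GPUs and executes a small matrix multiply on the first one.

```python
# Minimal sketch: confirm a GPU-enabled TensorFlow build can see the
# installed NVIDIA GPUs and execute work on one of them.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s): {gpus}")

if gpus:
    with tf.device("/GPU:0"):                 # pin the op to the first GPU
        a = tf.random.normal((4096, 4096))
        b = tf.random.normal((4096, 4096))
        c = tf.matmul(a, b)                   # executes on the GPU
    print("Matrix multiply result shape:", c.shape)
```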
NVIDIA AI Enterprise
NVIDIA AI Enterprise unlocks access to a wide range of frameworks that accelerate the development and deployment of AI projects. Leveraging pre-configured frameworks removes many of the manual tasks and complexity associated with software development, enabling you to deploy your AI models faster as each framework is tried, tested and optimised for NVIDIA GPUs. The less time spent developing, the greater the ROI on your AI hardware and data science investments.
Rather than trying to assemble thousands of co-dependent libraries and APIs from different authors when building your own AI applications, NVIDIA AI Enterprise removes this pain point by providing the full AI software stack, including frameworks for applications such as healthcare, computer vision, speech and generative AI.
Enterprise-grade support is provided on a 9x5 basis with a 4-hour SLA and direct access to NVIDIA’s AI experts, minimising risk and downtime while maximising system efficiency and productivity.
FIND OUT MORE >
Workload Management
Run:ai software allows intelligent resource management and consumption so that users can easily access GPU fractions, multiple GPUs or clusters of servers for workloads of every size and stage of the AI lifecycle. This ensures that all available compute can be utilised and GPUs never have to sit idle. Run:ai’s scheduler is a simple plug-in for Kubernetes clusters and adds high-performance orchestration to your containerised AI workloads.
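As a rough sketch of how such a request might look, the snippet below uses the official Kubernetes Python client to submit a pod that asks the Run:ai scheduler for half a GPU. The scheduler name, annotation key, namespace, container image and training script are illustrative assumptions; consult your Run:ai documentation for the exact conventions used by your installation.

```python
# Minimal sketch: submit a containerised workload that requests a GPU
# fraction from the Run:ai scheduler via the Kubernetes API.
# The scheduler name, annotation key, namespace, image and command below
# are assumptions for illustration; check your Run:ai documentation.
from kubernetes import client, config

config.load_kube_config()                          # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="fractional-gpu-job",
        annotations={"gpu-fraction": "0.5"},       # assumed Run:ai annotation
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",          # assumed scheduler name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/tensorflow:24.01-tf2-py3",  # example NGC image
                command=["python", "train.py"],    # hypothetical training script
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```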
FIND OUT MORE >
AI Optimised Storage
AI optimised storage appliances ensure that your 3XS AI servers are utilised as much as possible and always working at maximum efficiency. Scan AI offers software-defined storage appliances powered by PEAK:AIO, plus further options from leading brands such as Dell EMC, NetApp and DDN, to ensure we have an AI optimised storage solution that is right for you.
FIND OUT MORE >
Managed Hosting Solutions
AI projects scale rapidly and can consume huge amounts of GPU-accelerated resource alongside significant storage and networking overheads. To address these challenges, the Scan AI Ecosystem includes managed hosting options. We’ve partnered with a number of secure datacentres to deliver tailor-made hardware hosting environments that provide high performance and unlimited scalability, along with security and peace of mind. Organisations maintain control over their own systems, but without the day-to-day admin or the complex racking, power and cooling concerns associated with on-premises infrastructure.
FIND OUT MORE >
Select your 3XS MGX Custom AI Server
3XS AI servers are available in a range of base configurations that can be customised to your requirements.
GPUs | 1x NVIDIA GH200 Superchip | 2x NVIDIA GH200 Superchips | Up to 4x NVIDIA PCIe cards | Up to 4x NVIDIA PCIe cards |
CPUs | 1x NVIDIA GH200 Superchip | 2x NVIDIA GH200 Superchips | 2x NVIDIA Grace Superchips | 2x Intel Xeon up to 56 cores |
Cooling | Air / Water | Air | Air | Air |
RAM | 480GB | 960GB | 480GB | Up to 8TB |
Storage | 2x M.2, 8x NVMe E1.S | 2x M.2, 8x NVMe E1.S | 2x M.2, 8x NVMe E1.S | 2x M.2, 8x NVMe E1.S |
Networking | 2x NVIDIA ConnectX NICs or BlueField-3 DPUs up to 800Gb/s | 2x NVIDIA ConnectX NICs or BlueField-3 DPUs up to 800Gb/s | NVIDIA ConnectX NICs or BlueField-3 DPUs up to 800Gb/s | NVIDIA ConnectX NICs or BlueField-3 DPUs up to 800Gb/s |
Form Factor | 1U | 2U | 2U | 2U |
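For the configurations above with multiple NVIDIA PCIe GPUs, the sketch below shows one common way a training job can make use of every installed card, using TensorFlow's MirroredStrategy for data-parallel training; the model and synthetic dataset are placeholders for illustration.

```python
# Minimal sketch: data-parallel training across all installed NVIDIA GPUs
# (e.g. the "up to 4x PCIe card" configurations above) using TensorFlow's
# MirroredStrategy. The model and synthetic dataset are placeholders.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()        # one replica per visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so its variables
# are mirrored across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Synthetic data stands in for a real training set.
x = tf.random.normal((8192, 128))
y = tf.random.uniform((8192,), maxval=10, dtype=tf.int32)
model.fit(x, y, batch_size=256, epochs=2)
```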
If you’re still unsure of the best solution for your organisation, contact our team to discuss your projects or requirements and to arrange a free GUIDED PROOF OF CONCEPT TRIAL. Alternatively, you can train AI models on any device via virtualised NVIDIA GPUs hosted on our SCAN CLOUD PLATFORM.