AI Development Box Buyers Guide

Powered by the latest NVIDIA GPU accelerators, AI dev boxes built by Scan 3XS Systems are professional workstations for the first stage of AI projects, enabling data scientists to develop and debug models and create a Minimum Viable Product (MVP) with their own data sets. The Scan AI team has designed and developed a range of AI dev boxes to meet all requirements and budgets. This page will guide you through what to consider when choosing a workstation for machine learning, deep learning and AI.


Try before you buy

Scan AI Dev Boxes can be evaluated online via a Proof of Concept

Book a Test Drive

What makes Scan 3XS Systems dev boxes special?

Scan’s in-house experts at 3XS Systems have been at the forefront of the AI industry since pioneering the concept of the dev box in 2016, so we have a wealth of experience and a deep understanding of how to build great AI workstations. We believe quality components build a quality PC and are proud to partner with the world’s leading brands; on every product page you can see the make and model of every component that goes into our dev boxes.


NVIDIA Elite Partner

Scan has been an accredited NVIDIA Elite Partner since 2017, awarded for our expertise in the areas of deep learning and AI.


AI Optimised

Our in-house team includes data scientists who optimise the configuration and software stack of each system for AI workloads.


Whisper Quiet

Only hear what matters – select configurations are water-cooled and so are much quieter than air-cooled PCs.


Trusted by you

Scan 3XS Systems AI dev boxes are trusted by organisations including the NHS, University of Liverpool and University of Strathclyde.


7 Days Support

Our technical support engineers are available seven days a week to help with any queries.


3 Years Warranty

3XS Systems include a three-year warranty, so if anything develops a fault we’ll repair or replace it.

GPU Accelerator


The GPU accelerator is the most important component in an AI dev box, as it is the main driver of processing speed and accuracy in your model development and training. We recommend NVIDIA professional GPUs for maximum performance and consumer-grade NVIDIA GeForce GPUs for value for money. All NVIDIA GPUs include Tensor Cores, which are specifically designed to accelerate deep learning workloads.
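
To give a concrete, if simplified, picture of how Tensor Cores are used in practice, the sketch below shows a single mixed-precision training step in PyTorch (one of the frameworks pre-installed on our dev boxes); autocast routes eligible FP16 matrix operations onto the Tensor Cores. The model, batch size and learning rate are illustrative placeholders, not a recommended configuration.

```python
# Minimal sketch of one mixed-precision training step in PyTorch.
# FP16 matrix maths inside autocast is executed on the GPU's Tensor Cores.
# The model, batch size and learning rate are illustrative placeholders.
import torch
import torch.nn as nn

assert torch.cuda.is_available(), "This sketch assumes a CUDA-capable GPU"
device = torch.device("cuda")

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(256, 1024, device=device)
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = loss_fn(model(inputs), targets)   # runs largely in FP16 on Tensor Cores
scaler.scale(loss).backward()                # scale the loss to avoid FP16 underflow
scaler.step(optimizer)
scaler.update()
```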

The following table highlights the key specifications of the NVIDIA GPUs we recommend in our AI workstations.

GPU | Architecture | CUDA Cores | Tensor Cores | RT Cores | Memory | ECC Memory | Memory Interface | NVLink | TDP
A800 | Ampere | 6,912 | 432 | 0 | 40GB HBM2 | Yes | 5,120-bit | 400GB/sec | 240W
RTX 6000 Ada | Ada Lovelace | 18,176 | 568 | 142 | 48GB GDDR6 | Yes | 384-bit | No | 300W
RTX 5000 Ada | Ada Lovelace | 12,800 | 400 | 100 | 32GB GDDR6 | Yes | 256-bit | No | 250W
RTX A6000 | Ampere | 10,752 | 336 | 84 | 48GB GDDR6 | Yes | 384-bit | 112GB/sec | 300W
RTX 4500 Ada | Ada Lovelace | 7,680 | 240 | 60 | 24GB GDDR6 | Yes | 192-bit | No | 210W
RTX 4000 Ada | Ada Lovelace | 6,144 | 192 | 48 | 20GB GDDR6 | Yes | 160-bit | No | 130W
RTX A5500 | Ampere | 10,240 | 320 | 80 | 24GB GDDR6 | Yes | 384-bit | 112GB/sec | 230W
RTX A5000 | Ampere | 8,192 | 256 | 64 | 24GB GDDR6 | Yes | 384-bit | 112GB/sec | 230W
RTX A4500 | Ampere | 7,168 | 224 | 56 | 20GB GDDR6 | Yes | 320-bit | 112GB/sec | 200W
RTX A4000 | Ampere | 6,144 | 192 | 48 | 16GB GDDR6 | Yes | 256-bit | No | 140W
GeForce RTX 4090 | Ada Lovelace | 16,384 | 512 | 128 | 24GB GDDR6X | No | 384-bit | No | 450W
GeForce RTX 4080 SUPER | Ada Lovelace | 10,240 | 320 | 80 | 16GB GDDR6X | No | 256-bit | No | 320W
GeForce RTX 4080 | Ada Lovelace | 9,728 | 304 | 76 | 16GB GDDR6X | No | 256-bit | No | 320W
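
When comparing these cards, GPU memory is often the deciding factor, since the model, its gradients and optimiser states all have to fit in VRAM during training. As a rough rule of thumb, and an assumption rather than a vendor figure, plain FP32 training with the Adam optimiser needs in the region of 16 bytes per parameter before activations are counted; the sketch below turns that heuristic into a quick estimate.

```python
# Rough VRAM estimate for training: weights + gradients + Adam optimiser states.
# The 16-bytes-per-parameter figure is a heuristic and excludes activations,
# which can dominate at large batch sizes.
def estimate_training_vram_gb(num_parameters: float, bytes_per_param: int = 16) -> float:
    return num_parameters * bytes_per_param / 1e9

for name, params in [("110M parameters", 110e6), ("1.3B parameters", 1.3e9), ("7B parameters", 7e9)]:
    print(f"{name}: ~{estimate_training_vram_gb(params):.0f} GB before activations")
```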

Software Stack

We recommend and pre-install the latest Ubuntu Linux operating system plus a custom software stack built on NVIDIA CUDA-X. This includes Docker-CE, NVIDIA-Docker2 and NVIDIA-optimised libraries such as RAPIDS, TensorFlow, PyTorch and Caffe, alongside other leading data science software, providing accelerated workflows for faster data preparation, model training and data visualisation. We can also install Microsoft Windows 11 Pro on request.
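
Once the stack is installed, a quick way to confirm that the driver, CUDA and a framework can all see the GPUs is a short check like the one below, shown here for PyTorch; TensorFlow offers an equivalent via tf.config.list_physical_devices('GPU').

```python
# Quick sanity check that the pre-installed stack can see the GPU(s).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")
```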

Host Processor

The host processor or CPU plays an important role in a dev box during the data preparation stage. We recommend AMD Ryzen and Intel Core processors in most of our systems as they run at high frequencies and are relatively affordable compared to server-grade hardware. We further improve these CPUs’ capabilities for deep learning by partnering them with workstation-grade motherboards. For systems with two or more GPUs we recommend AMD Threadripper PRO and Intel Xeon W CPUs, as these have more PCIe lanes, enabling the GPUs to communicate efficiently with one another.
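
One place where CPU core count and clock speed show up directly is the input pipeline. As a simplified illustration, the PyTorch DataLoader sketch below uses several CPU worker processes to prepare batches so the GPU is not left waiting; the dataset, tensor shapes and worker count are placeholders.

```python
# Illustrative only: CPU worker processes prepare batches in parallel so the GPU
# is kept fed with data. The dataset, shapes and worker count are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(1_000, 3, 64, 64),
                            torch.randint(0, 10, (1_000,)))
    loader = DataLoader(
        dataset,
        batch_size=64,
        shuffle=True,
        num_workers=8,    # scale with the number of available CPU cores
        pin_memory=True,  # speeds up host-to-GPU copies over PCIe
    )
    for images, labels in loader:
        pass  # the GPU forward/backward pass would go here
```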

AMD Processors

To ensure the best AI model development performance we only recommend the fastest AMD Ryzen processors in our 3XS AI workstations. Ryzen 7 processors represent a basic AI workstation CPU, providing entry-level performance at a low price, with Ryzen 9 processors striking a good balance between performance and cost. For systems with two or more GPU accelerators we recommend using either AMD Threadripper or Threadripper PRO CPUs as these have more PCIe lanes, enabling effective and efficient GPU to GPU communication.

You can learn more about AMD Ryzen CPUs by reading our AMD DESKTOP CPU BUYERS GUIDE

Intel Processors

To ensure the best AI model development performance we only recommend the fastest Intel Core processors in our 3XS AI workstations. Core Ultra 7 processors represent a basic AI workstation CPU, providing entry-level performance at a low price, with Core Ultra 9 processors striking a good balance between performance and cost. For systems with two or more GPU accelerators we recommend using Intel Xeon W CPUs as these have more PCIe lanes, enabling effective and efficient GPU to GPU communication.

You can learn more about Intel Core and Xeon W CPUs by reading our INTEL DESKTOP CPU BUYERS GUIDE

System Memory

While having sufficient VRAM on the GPU accelerator is critically important, overall system performance will suffer without enough system memory to stage and cache data. We recommend configuring a system with at least as much system RAM as total GPU VRAM.
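
A quick way to sanity-check this guideline on an existing machine is sketched below; it assumes the psutil package and a CUDA-enabled PyTorch build are installed, which may not be the case on every system.

```python
# Sketch: compare installed system RAM against total GPU VRAM.
# Assumes the psutil package and a CUDA-enabled PyTorch are available.
import psutil
import torch

ram_gb = psutil.virtual_memory().total / 1e9
vram_gb = 0.0
if torch.cuda.is_available():
    vram_gb = sum(torch.cuda.get_device_properties(i).total_memory
                  for i in range(torch.cuda.device_count())) / 1e9

print(f"System RAM: {ram_gb:.0f} GB, total GPU VRAM: {vram_gb:.0f} GB")
if ram_gb < vram_gb:
    print("Consider more RAM: system memory is below total GPU memory")
```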

Storage

There’s no point in having the fastest GPU accelerators, CPUs and system memory if they are starved of data. We recommend high-performance NVMe SSDs in our AI workstations, with typical read speeds of at least 4GB/s, roughly an order of magnitude faster than a SATA SSD and dozens of times faster than a traditional HDD. That said, we recognise that you may also need to store old projects and documents on your workstation; a high-capacity HDD is ideal for this as it is very cost effective.
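
If you want a rough feel for your own drive’s sequential throughput, the sketch below times a simple write and read of a scratch file. It is a ballpark figure rather than a proper benchmark (a tool such as fio controls for OS caching; this does not).

```python
# Very rough sequential write/read timing of a 1GB scratch file.
# OS caching will flatter the read figure; treat the output as a ballpark only.
import os
import time

path = "scratch_io_test.bin"
size_mb = 1024
chunk = os.urandom(1024 * 1024)  # 1MB of random data

start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(size_mb):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())
write_s = time.perf_counter() - start

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_s = time.perf_counter() - start

os.remove(path)
print(f"Write: {size_mb / write_s:.0f} MB/s, read: {size_mb / read_s:.0f} MB/s")
```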

Connectivity

Moving data between systems can be time-consuming, so to make the most of the fast data processing capabilities of our AI workstations we use motherboards with integrated 10GbE NICs where possible. 10GbE has the added advantage of being compatible with twisted-pair copper CAT6/6a and CAT7 cabling with RJ45 connectors, so in most offices you won’t need to install new cabling, just a new switch. Where faster speeds are needed we can install either SFP-based Intel X-series NICs or NVIDIA Networking ConnectX SmartNICs for speeds of up to 400Gb/s.
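
To see why link speed matters, the arithmetic below estimates how long a dataset takes to move at different line rates. It ignores protocol overhead and the storage at either end, so real transfers will be somewhat slower; the 500GB dataset size is purely illustrative.

```python
# Idealised transfer times for a dataset at different network link speeds.
# Ignores protocol overhead and any storage bottlenecks at either end.
dataset_gb = 500  # illustrative dataset size

for label, gbit_per_s in [("1GbE", 1), ("10GbE", 10), ("100GbE", 100), ("400GbE", 400)]:
    seconds = dataset_gb * 8 / gbit_per_s
    print(f"{label}: ~{seconds / 60:.1f} minutes for {dataset_gb} GB")
```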

Cooling and Power

GPU accelerators consume a lot of power, so all our AI workstations are equipped with high-quality 80 PLUS-rated power supplies, ensuring a reliable and efficient power source for the system. In addition, the cooling of each workstation is optimised so that performance remains consistent under sustained load, run after run.

Ready to buy?

Click the links below to view the range of Scan 3XS Systems AI dev boxes. We offer pre-configured systems or can custom build a system to your preferred specification.

Pre-configured AI dev boxes >

Configure an AI dev box >