These custom-built deep learning edge AI devices are optimised for inferencing at the edge and are therefore ruggedised to cope with a wide range of environments. They are based on the latest embedded NVIDIA AGX GPU accelerators and are modular and fully configurable to meet your requirements. If you can't see the specification you would like, please call 01204 474747 or email [email protected].
These compact edge AI systems are powered by the NVIDIA Jetson platform and are ruggedised for use in a wide variety of field environments, including industrial, healthcare, smart city and surveillance applications.
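As a rough illustration of the inferencing workloads these systems target, the sketch below runs a single image-classification pass on the Jetson's integrated GPU. The PyTorch/torchvision stack, the ResNet-18 model and the input size are illustrative assumptions only, not part of any particular system's software load-out.

# Minimal sketch: one image-classification inference on a Jetson-class
# device with PyTorch. Model and input size are illustrative assumptions.
import torch
import torchvision.models as models

# Use the Jetson's integrated GPU when CUDA is available, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a pretrained ResNet-18 and switch it to inference mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

# Dummy input standing in for a camera frame (batch of 1, 3 x 224 x 224).
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(frame)
    predicted_class = logits.argmax(dim=1).item()
    print(f"Predicted class index: {predicted_class}")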
These custom-built deep learning servers are optimised for retraining AI models at the edge and are therefore ruggedised to cope with a wide range of environments. They are based on the latest PCIe NVIDIA GPU accelerators and are fully configurable to meet your requirements. If you can't see the specification you would like, please call 01204 474747 or email [email protected].
The Advantech SKY-6100 is an industrial GPU server for edge retraining of your AI models. This high-density 1U server supports up to five NVIDIA Tesla T4 GPU accelerators with two host processors from the Intel Xeon range.
The Advantech SKY-6200 is an industrial GPU server for edge retraining of your AI models. This high-density 2U server supports up to four NVIDIA Tesla T4 or V100S GPU accelerators with two host processors from the Intel Xeon range.
The Advantech SKY-6400 is an industrial GPU server for edge retraining of your AI models. This 4U server supports up to four NVIDIA Tesla T4 or V100S GPU accelerators with two host processors from the Intel Xeon range.
The Advantech SKY-6420 is an industrial GPU server for edge retraining of your AI models. This 4U server supports up to ten NVIDIA Tesla V100S GPU accelerators with two host processors from the Intel Xeon range.
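To illustrate the kind of edge retraining these servers are built for, here is a minimal sketch of fine-tuning a model across several GPUs with PyTorch's DataParallel. The framework, the placeholder dataset and the simple classifier are assumptions chosen for illustration; any deep learning stack that can drive multiple CUDA devices would apply equally.

# Minimal sketch: retraining a model across the server's GPUs with
# PyTorch DataParallel. Dataset and model here are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder data standing in for field-collected samples
# (1,000 samples, 128 features, 10 classes).
features = torch.randn(1000, 128)
labels = torch.randint(0, 10, (1000,))
loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)

# Simple classifier; replace with the model being retrained.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Spread each batch across all visible GPUs (e.g. the T4 or V100S cards).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")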
These custom-built deep learning servers are optimised for inferencing and are based on the latest PCIe NVIDIA GPU accelerators. The following systems are fully configurable to meet your requirements. If you can't see the specification you would like, please call 01204 474747 or email [email protected].
This compact 1U server can support up to two NVIDIA A2 GPUs. These GPUs feature a mix of CUDA and Tensor cores and are specially designed to accelerate inferencing workloads, with up to a 36x speed-up compared with inferencing on a CPU. The host processor is a single AMD EPYC CPU and the server can support multiple SSDs and hard disks.
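As an illustration of how the Tensor cores are typically exercised, the sketch below runs a batched inference pass in FP16 under torch.autocast, which routes the matrix maths onto the Tensor cores. PyTorch, the ResNet-50 model and the batch size are assumptions chosen for the example, and an available CUDA GPU is assumed.

# Minimal sketch: FP16 batched inference under torch.autocast so the A2's
# Tensor cores handle the matrix maths. Model and batch size are illustrative,
# and a CUDA-capable GPU is assumed to be present.
import torch
import torchvision.models as models

device = torch.device("cuda")
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()

# Dummy batch of 32 images standing in for an inference workload.
batch = torch.rand(32, 3, 224, 224, device=device)

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(batch)

print(logits.argmax(dim=1))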