Scan AI

Get in touch with our AI team.

Education & Training Services

Further your AI knowledge by signing up to one of our instructor-led courses

As the UK’s leading AI provider, Scan AI is also certified to deliver vendor education and training courses. We offer NVIDIA Deep Learning Institute (DLI) courses and NVIDIA Ideation Workshops, alongside a range of software webinars that demonstrate the benefits of selected AI applications. Click the tabs below to explore each further.

Investigation

• Involve all stakeholders to establish where you are in your AI journey and what your goals and use cases are

• Explore how to accelerate the business, reduce time to insight, and achieve ROI


Planning

• Map your goals to a schedule of activities and set priorities

• Create a roadmap to start using AI in your business


Implementation

• Run pilot projects to gain momentum

• Build an in-house AI team and provide in-house AI training

• Manage internal and external communications

What is an NVIDIA AI Ideation Workshop?

In collaboration with NVIDIA, the Scan AI team can provide a virtual Ideation workshop for your organisation. In this workshop, you will be able to evaluate your existing AI projects and wider strategy, or use the day to formulate an AI plan from scratch. This will be done in collaboration with experts in AI and deep learning practices from both Scan AI and NVIDIA. Following the workshop you will receive a written report with recommendations and guidance on how to implement your plans.

Your workshop will be completely subsidised by Scan and NVIDIA - with no charges and no obligation to purchase anything. Our goal is to promote the wider use of AI technology and show you the possibilities within your industry vertical.

An Example AI Ideation Workshop Agenda

9:00-9:15 Introductions
9:15-9:30 Workshop Overview - confirmation of goals
9:30-10:00 AI/Data Science Current State - what has been done, what worked, what didn't
10:00-10:30 AI/Data Science Future State - desired state in 1, 3 and 5 years
10:30-10:45 Break
10:45-12:30 Use Case Exploration - the most applicable use cases, with ROI
12:30-13:00 Lunch
13:00-14:00 Data Exploration - what data sources are available, and how ready they are for AI
14:00-15:00 Architecture Exploration - current and planned architecture for AI
15:00-15:15 Break
15:15-16:00 Summary and Initial Feedback - indication of AI readiness on a scale of 1-10
BOOK A WORKSHOP

Learn

• Learn from technical industry experts and instructors

• Gain hands-on experience with the most widely used, industry-standard software, tools, and frameworks


Qualify

• Earn an NVIDIA DLI certificate in select courses to demonstrate subject matter competency and support professional career growth


Implement

• Access GPU-accelerated servers in the cloud to complete hands-on exercises

• Build production-quality solutions with the same DLI base environment containers used in the courses, available from the NVIDIA NGC catalogue

NVIDIA Deep Learning Institute

The NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning and accelerated computing. Our DLI courses are delivered by qualified instructors who are perfectly placed to pass on their knowledge and educate developers on how to get the most from this rapidly evolving field. The DLI also teaches you how to optimise your code for performance using NVIDIA CUDA and OpenACC.

Explore Our Upcoming Courses

Data Parallelism: How to Train Deep Learning Models on Multiple GPUs

Introduction (15 mins)
• Meet the instructor
• Create an account at COURSES.NVIDIA.COM/JOIN

Stochastic Gradient Descent and the Effects of Batch Size (120 mins)
• Learn the significance of stochastic gradient descent when training on multiple GPUs.
• Understand the issues with sequential single-thread data processing and the theory behind speeding up applications with parallel processing.
• Understand loss function, gradient descent, and stochastic gradient descent (SGD).
• Understand the effect of batch size on accuracy and training time with an eye towards its use on multi-GPU systems.

Break (60 mins)

Training on Multiple GPUs with PyTorch Distributed Data Parallel (DDP) (120 mins)
• Learn to convert single GPU training to multiple GPUs using PyTorch Distributed Data Parallel.
• Understand how DDP coordinates training among multiple GPUs.
• Refactor single-GPU training programs to run on multiple GPUs with DDP (see the sketch after this agenda).

Break (15 mins)

Maintaining Model Accuracy when Scaling to Multiple GPUs (90 mins)
• Understand and apply key algorithmic considerations to retain accuracy when training on multiple GPUs.
• Understand what might cause accuracy to decrease when parallelizing training on multiple GPUs.
• Learn and understand techniques for maintaining accuracy when scaling training to multiple GPUs.

Workshop Assessment (30 mins)
• Use what you have learned during the workshop: complete the workshop assessment to earn a certificate of competency.

Final Review (15 mins)
• Review key learnings and wrap up questions.
• Take the workshop survey.

Networking (30 mins)
• Discuss your AI projects with the Scan AI data science team
• Make a follow-up appointment
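
For a flavour of the DDP refactor covered in this course, the sketch below shows a hypothetical single-file training script converted to run on multiple GPUs with PyTorch Distributed Data Parallel. It assumes a torchrun launch on a machine with NVIDIA GPUs; the toy model, synthetic data and the file name train_ddp.py are illustrative only and are not course material.

```python
# Hypothetical minimal sketch: multi-GPU training with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process it starts
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and synthetic data stand in for a real training workload
    model = nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])      # gradients are synchronised across GPUs
    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)            # each GPU sees a different shard of the data
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)                     # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimiser.zero_grad()
            loss_fn(model(x), y).backward()          # DDP all-reduces gradients during backward
            optimiser.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, for example, torchrun --nproc_per_node=4 train_ddp.py: each process drives one GPU, and DDP averages gradients across them after every backward pass so all model replicas stay in sync.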

Next Course Date - 28th November 2024

BOOK THIS COURSE

Generative AI with Diffusion Models

Introduction (15 mins)
• Meet the instructor.
• Create an account at courses.nvidia.com/join

From U-Nets to Diffusion (60 mins)
• Build a U-Net, a type of autoencoder for images.
• Learn about transposed convolution to increase the size of an image (see the sketch after this agenda).
• Learn about non-sequential neural networks and residual connections.
• Experiment with feeding noise through the U-Net to generate new images.

Break (10 mins)

Control with Context (60 mins)
• Learn how to alter the output of the diffusion process by adding context embeddings.
• Add additional model optimizations such as sinusoidal position embeddings, the GELU activation function, and attention.

Text-to-Image with CLIP (60 mins)
• Walk through the CLIP architecture to learn how it associates image embeddings with text embeddings.
• Use CLIP to train a text-to-image diffusion model.

Break (60 mins)

State-of-the-art Models (60 mins)
• Review various state-of-the-art generative AI models and connect them to the concepts learned in class.
• Discuss prompt engineering and how to better influence the output of generative AI models.
• Learn about content authenticity and how to build trustworthy models.

Final Review (60 mins)
• Review key learnings and answer questions.
• Complete the assessment and earn a certificate.
• Complete the workshop survey.
• Learn how to set up your own AI application development environment.

Networking (30 mins)
• Discuss your AI projects with the Scan AI data science team
• Make a follow-up appointment
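
As a small taster of the U-Net material, the sketch below shows how a transposed convolution increases the spatial size of a feature map, reversing the downsampling of a strided convolution. The layer shapes are invented for illustration; in a real U-Net the upsampled features would also be combined with the matching encoder features through a residual (skip) connection.

```python
# Minimal illustration of downsampling with a strided convolution and
# upsampling with a transposed convolution (illustrative shapes only).
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)                                  # 16x16 feature map, 64 channels
down = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)   # encoder step: halves H and W
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)       # decoder step: doubles H and W

encoded = down(x)        # shape (1, 128, 8, 8)
decoded = up(encoded)    # shape (1, 64, 16, 16), back at the input resolution
print(encoded.shape, decoded.shape)
```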

Next Course Date - 12th December 2024

BOOK THIS COURSE

Rapid Application Development with Large Language Models

Introduction (15 mins)
• Meet the instructor.
• Create an account at courses.nvidia.com/join

From Deep Learning to Large Language Models (75 mins)
• Learn how large language models are structured and how to use them.
• Review deep learning- and class-based reasoning, and see how language modeling falls out of it.
• Discuss transformer architectures, interfaces, and intuitions, as well as how they scale up and alter to make state-of-the-art LLM solutions.

Break (15 mins)

Specialized Encoder Models (45 mins)
• Learn how to look at the different task specifications.
• Explore cutting-edge HuggingFace encoder models.
• Use already-tuned models for interesting tasks such as token classification, sequence classification, range prediction, and zero-shot classification (see the sketch after this agenda).

Break (60 mins)

Encoder-Decoder Models for Seq2Seq (75 mins)
• Learn about forecasting LLMs for predicting unbounded sequences.
• Introduce a decoder component for autoregressive text generation.
• Discuss cross-attention for sequence-as-context formulations.
• Discuss general approaches for multi-task, zero-shot reasoning.
• Introduce multimodal formulation for sequences, and explore some examples.

Decoder Models for Text Generation (45 mins)
• Learn about decoder-only GPT-style models and how they can be specified and used.
• Explore when decoder-only is good, and talk about issues with the formation.
• Discuss model size, special deployment techniques, and considerations.
• Pull in some large text-generation models, and see how they work.

Break (15 mins)

Stateful LLMs (60 mins)
• Learn how to elevate language models above stochastic parrots via context injection.
• Show off modern LLM composition techniques for history and state management.
• Discuss retrieval-augmented generation (RAG) for external environment access.

Assessment and Q&A (60 mins)
• Review key learnings.
• Take a code-based assessment to earn a certificate.

Networking (30 mins)
• Discuss your AI projects with the Scan AI data science team
• Make a follow-up appointment
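
To give a feel for the already-tuned encoder models mentioned above, here is a minimal sketch using the Hugging Face transformers pipeline for zero-shot classification. The example sentence and labels are invented, and on first use the pipeline downloads a default model from the Hugging Face Hub.

```python
# Minimal sketch: zero-shot classification with an already-tuned Hugging Face pipeline.
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("zero-shot-classification")    # downloads a default NLI model on first use
result = classifier(
    "Our GPU cluster keeps running out of memory during training.",   # invented example text
    candidate_labels=["infrastructure", "billing", "staffing"],       # invented example labels
)
print(result["labels"][0], result["scores"][0])       # highest-scoring label and its score
```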

Next Course Date - 12th February 2025

BOOK THIS COURSE

Run:ai

Run:ai enables you to maximise GPU utilisation by pooling disparate compute resources and enabling intelligent scheduling and allocation.


Weights & Biases

W&B helps manage your AI workflows end-to-end by quickly tracking experiments and iterations, evaluating model performance, and reproducing models.


UbiOps

The UbiOps platform helps teams to quickly run their AI workloads as reliable and secure micro-services, without upending their existing workflows.


Supervisely

Supervisely helps you develop AI faster and better with on-premise, enterprise-grade solutions for every task - from labelling to building production models.


YellowDog

YellowDog provides a single interface to control any compute device - on-prem, hybrid or multi-cloud - supporting any operating system.

AI Software Webinars

Software applications are a critical part of any AI workflow, from libraries and frameworks for data preparation and model development, through GPU virtualisation during training, to computer vision and orchestration. Our range of webinars aims to show you first-hand how AI-specific applications can revolutionise your productivity, visualisation or time to results.

Explore Our Upcoming Webinars

Run:ai

What is Run:ai? Run:ai is a scheduling and orchestration platform that creates virtual ‘pools’ of GPU resources, so they can be dynamically allocated as tasks require.
Why do I need Run:ai? Run:ai's platform revolutionises AI and machine learning operations by addressing key infrastructure challenges through dynamic GPU resource allocation, comprehensive AI lifecycle support, and strategic resource management. By pooling GPU resources across environments and using advanced orchestration, Run:ai significantly increases GPU availability, utilisation, and workload capacity, all with no manual resource intervention, accelerating innovation and providing a scalable, agile, and cost-effective solution for enterprises.
What will I learn on this webinar? Our webinars are led by one of our in-house data scientists, who will show you how to:

Set up batch scheduling of your workloads
Reduce GPU idleness and increase cluster utilisation with job queueing and opportunistic batch job scheduling

Ensure equity amongst workgroups
Prevent resource contention with over-quota priorities, automatic job preemption, and fairshare resource allocation

Get the most from the user-friendly interface
See real-time and historical metrics by job, workload, and team in a single dashboard. Assign compute guarantees to critical workloads, promote oversubscription, and react to business needs easily.

FREE

Next Webinar Date - TBC

BOOK THIS WEBINAR

Weights & Biases

What is Weights & Biases? Weights & Biases (W&B) is a software platform to ensure transparency and explainability across the ML lifecycle, providing end-to-end AI oversight.
Why do I need Weights & Biases? W&B helps your ML team unlock their productivity by optimising, visualising, collaborating on, and standardising their model and data pipelines, regardless of framework, environment, or workflow. Think of W&B as a GitHub for machine learning models, allowing you to build models faster, fine-tune LLMs, and develop GenAI applications with confidence, all in one system of record that developers are excited to use.
What will I learn on this webinar? Our webinars are led by one of our in-house data scientists, who will show you how to:

Track and compare your models
Quickly and easily implement experiment logging by adding just a few lines to your Python script and start logging results (see the sketch below).

Visualise your experiments
See model metrics stream live into interactive graphs and tables, making it easy to see how your latest ML model is performing compared to previous experiments.

Debug performance in real-time
See how your model is performing and identify problem areas during training, via rich media including images, video, audio, and 3D objects.
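
Those few lines of experiment logging look roughly like the sketch below. The project name, config values and loss curve are placeholders; it assumes the wandb package is installed and you are logged in to a W&B account.

```python
# Minimal sketch of W&B experiment logging (placeholder project, config and metrics).
# Requires: pip install wandb, then `wandb login`
import random
import wandb

wandb.init(project="demo-project", config={"lr": 0.01, "epochs": 5})
for epoch in range(wandb.config.epochs):
    loss = 1.0 / (epoch + 1) + random.random() * 0.05   # stand-in for a real training loss
    wandb.log({"epoch": epoch, "loss": loss})            # streams live to the W&B dashboard
wandb.finish()
```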

FREE


Next Webinar Date - TBC

BOOK THIS WEBINAR

UbiOps

What is UbiOps? UbiOps is an orchestration platform that enables your team to run AI and ML workloads as reliable, secure micro-services and to deploy modular applications across local, hybrid and cloud environments.
Why do I need UbiOps? UbiOps software integrates seamlessly into your data science workflows within minutes, avoiding the time-consuming burden of setting up and managing expensive cloud infrastructure. Whether you are a start-up looking to launch an AI product or a data science team at a large organisation, the UbiOps platform will act as a reliable backbone for any AI or ML service.
What will I learn on this webinar? Our webinars are led by one of our in-house data scientists, who will show you how to:

Get started in seconds
Run your first job in minutes with zero MLOps or DevOps experience. Manage countless AI workloads simultaneously from a single control plane.

Deploy LLMs in your own private environment
Easily deploy off-the-shelf foundation models like LLMs and Stable Diffusion in your private projects and run them on your own infrastructure.

Get the most from security features
Benefit from robust security features such as end-to-end encryption, secure data storage, and access controls, alongside compliance with regulations such as GDPR and SOC 2, so you can use off-the-shelf generative models without worrying about data privacy.

FREE


Next Webinar Date - TBC

BOOK THIS WEBINAR

Supervisely

What is Supervisely? Supervisely is a platform for computer vision that offers AI-assisted labelling and integration with Jupyter notebooks.
Why do I need Supervisely? Label images, videos, LiDAR 3D sensor fusion data or even DICOM volumes with Supervisely software. Manage datasets, collaborate and train neural networks on a platform that integrates countless open-source tools and custom-built solutions within a single ecosystem using Supervisely Apps - interactive web apps running in your browser, yet powered by Python.
What will I learn on this webinar? Our webinars are led by one of our in-house data scientists, who will show you how to:

Use powerful labelling suites
Packed with advanced annotation tools, Supervisely provides a comprehensive set of features that distinguishes it from other labelling tools.

Organise data and users at scale
Management and collaboration tools help you organise your valuable assets, monitor quality and performance, and automate routine operations.

Benefit from enterprise-grade features
Supervisely Enterprise Edition (EE) is built for companies that want to scale their AI infrastructure. It is available as both cloud and self-hosted installations, with additional user governance and security features.

FREE


Next Webinar Date - TBC

BOOK THIS WEBINAR

YellowDog

What is YellowDog? The YellowDog platform is a cloud workload management interface that makes multi-cloud and hybrid cloud management and orchestration easy.
Why do I need YellowDog? Scaling applications across the cloud can be challenging. Without aligning computing resources with your workload’s fluctuating requirements, you could end up exceeding your budgets and missing your deadlines. The YellowDog Platform allows you to benefit from the flexibility the cloud has to offer, without the limitations. The powerful scheduler works seamlessly alongside intelligent provisioning tools, so you can manage your workloads across multiple geographical regions, machine types and cloud instances.
What will I learn on this webinar? Our webinars are led by one of our in-house data scientists, who will show you how to:

Take advantage of rapid scaling
Access unparalleled scale and easily manage computing resources from multiple providers, regions, and machine types.

Set up dynamic clusters
Respond to the specific needs of your workload with computing clusters that are composable and that expand and contract as required.

Get the most from the user-friendly interface
Deliver workloads on time and on budget with a solution that is cloud native, fault tolerant and accessible via an easy-to-use portal.

FREE


Next Webinar Date - TBC

BOOK THIS WEBINAR