AI Platforms / Deployment

AR/VR

Computer Vision / Video Analytics

Content Creation / Rendering

Conversational AI

Cybersecurity

Data Center / Cloud

Data Science

Development & Optimization

Edge Computing

Generative AI

MLOps

Models / Libraries / Frameworks

Networking / Communications

Robotics

Simulation / Modeling / Design

 

 

=========================

Architecture / Engineering / Construction

Aerospace

Gaming

Financial Services

Smart Cities / Spaces

 

=================================================================

Virtual Full-Day Workshops:

Building LLM Applications With Prompt Engineering

Instructor-Led Workshop: Building LLM Applications With Prompt Engineering

Monday, March 17, 09:00-17:00 CET (UTC+01:00)


With the incredible capabilities of large language models (LLMs), enterprises are eager to integrate them into their products and internal applications for a wide variety of use cases. These include text generation, large-scale document analysis, chatbot assistants, and more.

The fastest way to begin using LLMs for diverse tasks is with modern prompt engineering techniques. These techniques are also foundational for more advanced LLM-based methods, such as Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). In this workshop, you'll work with a language model NVIDIA NIM™ microservice, powered by the open-source Llama 3.1 large language model, alongside the popular LangChain library. The workshop provides a foundational skill set for building a range of LLM-based applications using prompt engineering.

You’ll learn how to:

  • Apply iterative prompt engineering best practices to create LLM-based applications for various language-related tasks.
  • Use LangChain to organize and compose LLM workflows.
  • Write application code that uses LLMs for generative tasks, document analysis, chatbot applications, and more.
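The core pattern here, a prompt template composed with a model call and an output parser, can be sketched in plain Python. The `fake_llm` function below is an invented stand-in for a real model endpoint such as a Llama 3.1 NIM; LangChain expresses the same composition with its `prompt | llm | parser` pipe syntax:

```python
# Minimal sketch of the prompt -> model -> parser pipeline that libraries
# like LangChain compose for you. The "LLM" here is a stand-in function.

def make_prompt(template):
    """Return a step that fills the template from the input dict."""
    return lambda inputs: template.format(**inputs)

def fake_llm(prompt):
    """Stand-in for a real model call (e.g., a Llama 3.1 NIM endpoint)."""
    return f"[model answer to: {prompt}]"

def strip_parser(text):
    """Trivial output parser: tidy whitespace."""
    return text.strip()

def chain(*steps):
    """Compose steps left to right, like LangChain's `|` operator."""
    def run(inputs):
        value = inputs
        for step in steps:
            value = step(value)
        return value
    return run

classify = chain(
    make_prompt("Classify the sentiment of this review as positive or negative: {review}"),
    fake_llm,
    strip_parser,
)

print(classify({"review": "The battery life is fantastic."}))
```

Swapping the template or the parser changes the task without touching the rest of the chain, which is what makes the composition style productive.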

 

 

Prerequisite(s):

  • Familiarity with basic programming fundamentals, such as functions and variables.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

Fundamentals of Deep Learning

Instructor-Led Workshop: Fundamentals of Deep Learning

Monday, March 17, 09:00-17:00 CET (UTC+01:00)


Explore the basics of deep learning by training and deploying neural networks and using results to improve performance and capabilities.

You’ll learn how to:

  • Implement common deep learning workflows, such as image classification and object detection.
  • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability.
  • Deploy your neural networks to start solving real-world problems.
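As a taste of what "training" means mechanically, here is a minimal sketch of the loop behind every deep learning workflow: forward pass, loss, gradient update, repeat. It fits a one-weight linear model in plain Python; the workshop itself works with full neural networks on GPUs:

```python
# Bare-bones sketch of the training loop underlying deep learning:
# forward pass, loss gradient, parameter update, repeat.

def train(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        # mean-squared-error gradients for the model y_hat = w*x + b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad_w   # gradient descent step
        b -= lr * grad_b
    return w, b

def mse(w, b, xs, ys):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]   # underlying rule: y = 2x + 1
w, b = train(xs, ys)
print(round(w, 2), round(b, 2))       # recovers roughly w=2, b=1
```

Real networks stack many such parameters and nonlinearities, but the train/evaluate/adjust cycle is the same one the workshop's image-classification labs exercise.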

 

 

Prerequisite(s):

  • An understanding of fundamental programming concepts in Python 3, such as functions, loops, dictionaries, and arrays.
  • Familiarity with pandas data structures.
  • An understanding of how to compute a regression line.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

Building AI Agents with Multimodal Models

Instructor-Led Workshop: Building AI Agents With Multimodal Models

Tuesday, March 18, 09:00-17:00 CET (UTC+01:00)


Just like how humans have multiple senses to perceive the world around them, more and more computer sensors are being developed to capture a wide variety of data. In the health industry, computed tomography (CT) scans provide a 3D representation to detect potentially dangerous abnormalities. In the robotics industry, lidars help robots see depth and navigate their complex environments. In this course, you'll develop neural network agents that can reason using many different data types by exploring multiple fusion techniques.

You’ll learn about:

    • Different data types and how to make them ready for neural networks.
    • Model fusion, and the differences between early, late, and intermediate fusion.
    • PDF extraction using OCR.
    • The difference between modality and agent orchestration.
    • Customization of NVIDIA AI Blueprints for video search and summarization.

Upon completion, you’ll be able to orchestrate several multimodal agents into an application.
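To make the fusion vocabulary concrete, here is a toy plain-Python contrast between early fusion (combine features, then run one model) and late fusion (run one model per modality, then combine outputs). The feature vectors and the `score` head are invented stand-ins for learned networks:

```python
# Toy contrast of early vs. late fusion for multimodal inputs.
# Features and the "model head" are stand-ins for learned components.

image_feats = [0.2, 0.9]   # e.g., from a vision encoder
audio_feats = [0.7, 0.1]   # e.g., from an audio encoder

def score(features):
    """Stand-in non-linear model head: respond to the strongest feature."""
    return max(features)

# Early fusion: concatenate modalities first, then run a single model.
early = score(image_feats + audio_feats)

# Late fusion: model each modality separately, then combine the outputs.
late = (score(image_feats) + score(audio_feats)) / 2

print(early, late)
```

Because the head is non-linear, the two strategies disagree (0.9 vs. 0.8 here); intermediate fusion, covered in the course, merges learned representations somewhere between these two extremes.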

 

 

Prerequisite(s):

      • Basic understanding of Python, including classes, objects, and decorators.
      • Basic understanding of neural networks, such as image convolution and sequential models.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

Fundamentals of Accelerated Data Science [DLIW73639]

Instructor-Led Workshop: Fundamentals of Accelerated Data Science

Tuesday, March 18, 09:00-17:00 CET (UTC+01:00)


Data science is about using scientific methods, processes, algorithms, and systems to analyze and extract insights from data. It lets organizations turn data into a valuable resource, leading to smarter decision-making, improved operations, and enhanced customer experiences. In this workshop, you’ll explore how to use GPU-accelerated tools to conduct data science faster, leading to more scalable, reliable, and cost-effective results.

You’ll learn how to:

  • Use cuDF to accelerate pandas, Polars, and Dask for analyzing datasets of all sizes efficiently.
  • Use a wide variety of machine learning algorithms, including XGBoost, for different data science problems.
  • Deploy machine learning models on an NVIDIA Triton™ Inference Server to deliver optimal performance.
  • Apply powerful graph algorithms to analyze complex networks with NetworkX and cuGraph.
  • Perform multiple analysis tasks on massive datasets to stave off a simulated epidemic outbreak affecting the UK.

You’ll also be able to perform and accelerate the end-to-end data science workflow. The speedup will translate into more iteration cycles, better performance, and improved productivity.
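As a hint of what drives XGBoost, the sketch below implements the additive idea behind gradient boosting on 1-D toy data: each round fits a one-split decision stump to the current residuals and adds a damped copy to the ensemble. This is a conceptual illustration only, not the library's actual (GPU-accelerated, regularized) implementation:

```python
# Conceptual sketch of gradient boosting: fit weak learners to residuals.

def fit_stump(xs, residuals):
    """Find the one-split stump minimizing squared error on the residuals."""
    best = (float("inf"), None)
    for split in xs[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if err < best[0]:
            best = (err, (split, lm, rm))
    split, lm, rm = best[1]
    return lambda x: lm if x <= split else rm

def boost(xs, ys, rounds=20, lr=0.3):
    """Accumulate damped stumps, each trained on what the ensemble still misses."""
    stumps, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]
model = boost(xs, ys)
print(round(model(2), 2), round(model(5), 2))
```

Each round shrinks the residuals, which is why boosted ensembles of very weak learners end up accurate; XGBoost adds regularization, second-order gradients, and massive parallelism on top of this loop.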

 

 

Prerequisite(s):

  • Experience with Python, ideally including pandas and NumPy.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

Rapid Application Development Using Large Language Models (LLMs)

Instructor-Led Workshop: Rapid Application Development Using Large Language Models (LLMs)

Tuesday, March 18, 09:00-17:00 CET (UTC+01:00)


Most enterprises need to perform multiple language-related tasks every day. These include organizing documents based on common themes (text classification), writing emails (content generation), removing harmful posts from their online communities (toxicity classification), and much more. Applications powered by LLMs can help enterprises automate these and many other tasks, letting them streamline their operations, decrease expenses, and increase productivity. Alternatively, enterprises can use large language model-powered apps to provide innovative and improved services to clients or strengthen customer relationships, for example by providing customer support with AI companions or using sentiment analysis apps to extract valuable customer insights.

You’ll learn about:

  • The intuitions and best practices underlying LLM architectures and systems, including transformers and context insertion.
  • Techniques to apply state-of-the-art language models to problems spanning vanilla classification, novel content generation, and image/audio reasoning.
  • Engineering and abstraction practices to work with and keep up with the evolving LLM ecosystem, including Hugging Face and LangChain offerings.

You’ll also have a deep understanding of the modern LLM ecosystem, the architectures and design choices that power the models, and the techniques necessary to build your own LLM-powered applications with a well-informed and responsible strategy.
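One of the techniques named above, context insertion, can be previewed in a few lines: the task definition and a handful of labeled examples are placed directly into the prompt, steering a general-purpose model toward a specific task such as toxicity classification. The `build_prompt` helper and its inputs are invented for illustration:

```python
# Sketch of context insertion (few-shot prompting): the task and labeled
# examples live entirely in the prompt; no model weights change.

def build_prompt(task, examples, query):
    lines = [task]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}")
    lines.append(f"Text: {query}\nLabel:")   # the model completes this line
    return "\n\n".join(lines)

prompt = build_prompt(
    task="Classify each text as 'toxic' or 'ok'.",
    examples=[("You are an idiot.", "toxic"), ("Great point, thanks!", "ok")],
    query="Have a wonderful day.",
)
print(prompt)
```

Because the task lives in the context rather than the weights, the same deployed model can serve classification, generation, and analysis workloads by prompt alone.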

 

 

Prerequisite(s):

  • Professional experience with the Python programming language.
  • Familiarity with fundamental deep learning topics like model architecture, training, and inference.
  • Familiarity with a modern Python-based deep learning framework, preferably PyTorch.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

Building Agentic AI Applications With Large Language Models (LLMs)

Instructor-Led Workshop: Building Agentic AI Applications With Large Language Models (LLMs)

Wednesday, March 19, 09:00-17:00 CET (UTC+01:00)


The bar for what AI-powered agents can do has been steadily rising over the past few years. New breakthroughs allow them to not only engage in conversations, but also use tools, conduct research, and execute on complex objectives at scale. This course helps you develop sophisticated agent systems capable of deep reasoning, research, tool use, and distributed operation.

You’ll gain hands-on experience in designing agents that efficiently retrieve and refine information, intelligently route queries, and execute tasks concurrently using orchestration tools like LangGraph and deployment abstractions like NVIDIA NIM™ microservices. By the end of the course, you’ll be proficient in creating scalable, high-performance agent architectures capable of thriving in dynamic, real-world applications.

You’ll learn:

  • Fundamentals of agent systems, including tool-calling, state management, and content pipelines.
  • How to develop concurrent, multi-threaded agents using LangChain and LangGraph.
  • How expert systems can be adapted for real-time data retrieval, processing, and adaptive feedback.
  • How to use NVIDIA NIM and other scalable frameworks for robust, real-world agentic use cases.
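The tool-calling loop at the heart of such agents can be sketched without any framework: the model emits a structured request naming a tool and its arguments, and the runtime dispatches it to a registered function. The tools and the hand-written "model output" below are hypothetical; LangGraph and NIM-based stacks manage this loop, plus state and concurrency, for you:

```python
# Minimal sketch of the agent tool-calling loop: parse a structured
# request from the model, route it to a registered function, return the
# result (which would be fed back to the model as the next observation).

import json

TOOLS = {
    "add": lambda a, b: a + b,
    "word_count": lambda text: len(text.split()),
}

def dispatch(tool_call_json):
    """Route a model-emitted tool call to the registered function."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Pretend the LLM responded with this structured tool call:
model_output = '{"name": "word_count", "arguments": {"text": "deploy the agent"}}'
print(dispatch(model_output))
```

Everything else in an agent framework, state management, routing, retries, parallel branches, is machinery wrapped around this request/dispatch/observe cycle.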

 

 

Prerequisite(s):

  • Familiarity working with LLM-based applications.
  • Familiarity with LLM orchestration frameworks like LangChain.
  • Intermediate Python experience.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

 

 

Efficient Large Language Model (LLM) Customization

 

Instructor-Led Workshop: Efficient Large Language Model (LLM) Customization

Wednesday, March 19, 09:00-17:00 CET (UTC+01:00)

Enterprises need to execute language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support. And they need to do so in the most cost-effective way. Large language models can automate these tasks, and efficient LLM customization techniques can increase a model’s capabilities and reduce the size of models required for use in enterprise applications.

In this course, you’ll go beyond prompt-engineering LLMs and learn techniques to efficiently customize pretrained LLMs for your specific use cases. We’ll cover how to do this without engaging in the computationally intensive and expensive process of pretraining your own model or fine-tuning a model’s internal weights. Using NVIDIA NIM microservices, NeMo Curator, and NeMo Framework, you’ll learn various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.
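One family of such techniques, low-rank adaptation (LoRA), comes down to simple arithmetic: the frozen weight matrix W is adjusted by a scaled low-rank product B @ A, so only the small A and B matrices are trained. A plain-Python sketch with toy numbers (the matrices here are invented; real adapters attach to transformer layers):

```python
# Sketch of the LoRA update: W_eff = W + (alpha / r) * (B @ A), where only
# the low-rank factors A and B are trainable.

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r = 4, 4, 1                # rank-1 adapter for a 4x4 layer
W = [[1.0 if i == j else 0.0 for j in range(d_in)]
     for i in range(d_out)]             # frozen pretrained weight (identity here)
B = [[0.5] for _ in range(d_out)]       # trainable, d_out x r
A = [[0.1, 0.2, 0.3, 0.4]]              # trainable, r x d_in
alpha = 2.0                             # scaling hyperparameter

delta = matmul(B, A)                    # low-rank update, d_out x d_in
W_eff = [[w + (alpha / r) * d for w, d in zip(wr, dr)]
         for wr, dr in zip(W, delta)]

frozen = d_out * d_in
trained = r * (d_out + d_in)
print(f"trained {trained} parameters instead of {frozen}")
```

At realistic sizes (say d_in = d_out = 4096, r = 8) the trainable fraction is well under one percent, which is what makes PEFT tractable for enterprise customization.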

Prerequisite(s):

  • Professional experience with the Python programming language.
  • Familiarity with fundamental deep learning topics like model architecture, training, and inference.

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

Fundamentals of Accelerated Computing With Modern CUDA C++

Instructor-Led Workshop: Fundamentals of Accelerated Computing With Modern CUDA C++

Wednesday, March 19, 09:00-17:00 CET (UTC+01:00)


This workshop provides a comprehensive introduction to general-purpose GPU programming with NVIDIA® CUDA®. You’ll learn how to write, compile, and run GPU-accelerated code, use CUDA core libraries to tap into the power of massive parallelism provided by modern GPU accelerators, optimize memory migration between CPU and GPU, and implement your own algorithms.

At the conclusion of the workshop, you’ll have an understanding of the fundamental concepts and techniques for accelerating C++ code with CUDA and be able to:

  • Write and compile code that runs on the GPU.
  • Optimize memory migration between CPU and GPU.
  • Use powerful parallel algorithms that simplify adding GPU acceleration to your code.
  • Implement your own parallel algorithms by directly programming GPUs with CUDA kernels.
  • Use concurrent CUDA streams to overlap memory traffic with compute.
  • Know where, when, and how to best add CUDA acceleration to existing CPU-only applications.
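As a preview of the kernel-writing style the workshop teaches, the plain-Python function below mimics the grid-stride loop pattern common in CUDA kernels. This is CPU code for illustration only; in CUDA the per-thread loop bodies run in parallel, and the index expressions come from `blockIdx`, `blockDim`, and `threadIdx`:

```python
# CPU-side sketch (not GPU code) of the CUDA grid-stride loop pattern:
# each thread starts at its global index and hops forward by the total
# thread count, so any thread count can cover an array of any size.

def saxpy_grid_stride(a, x, y, total_threads):
    out = list(y)
    n = len(x)
    for tid in range(total_threads):    # in CUDA these iterations run in parallel
        i = tid                         # blockIdx.x * blockDim.x + threadIdx.x
        while i < n:
            out[i] = a * x[i] + out[i]  # the per-element work (SAXPY)
            i += total_threads          # gridDim.x * blockDim.x
    return out

print(saxpy_grid_stride(2.0, [1, 2, 3, 4, 5], [10, 10, 10, 10, 10], total_threads=2))
```

The pattern matters because it decouples the launch configuration from the data size, one of the first idioms the workshop's CUDA C++ exercises establish.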

 

 

Prerequisite(s):

    • Basic C++ competency, including familiarity with lambda expressions, loops, conditional statements, functions, standard algorithms, and containers.
    • No previous knowledge of CUDA programming is assumed.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

 

 

Building RAG Agents With LLMs

Instructor-Led Workshop: Building RAG Agents With LLMs

Thursday, March 20, 09:00-17:00 CET (UTC+01:00)


Agents powered by large language models (LLMs) are quickly gaining popularity with both individuals and companies as people discover emerging capabilities and opportunities to greatly improve their productivity. An especially powerful recent development has been the popularization of retrieval-based LLM systems that can hold informed conversations by using tools, looking at documents, and planning their approaches. These systems are exciting to experiment with and offer unprecedented opportunities to make life easier, but they also require many queries to large deep learning models and need to be implemented efficiently. This course will demonstrate how you can deploy an agent system in practice and scale up your system to meet the demands of your customers.

You’ll learn:

  • The components needed to create a simple RAG agent that considers the set of documents provided to it.
  • Organization techniques to structure your system in a modular way, so you can scale up the deployment from a web-hosted service to a dedicated GPU cluster.
  • Tracking schemes to evaluate your system’s efficiency and quality of output.
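The retrieval step at the core of a RAG agent can be sketched minimally: score each document against the query (naive word overlap here; production systems use embedding models such as NeMo Retriever), then insert the best match into the prompt. The documents and helper names are invented for illustration:

```python
# Minimal sketch of retrieval-augmented generation: retrieve the most
# relevant document, then ground the prompt in it.

DOCS = [
    "The NIM Operator manages NIM microservices on Kubernetes.",
    "LangGraph structures agents as stateful graphs.",
    "Prometheus scrapes metrics and Grafana visualizes them.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (toy scorer)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does the NIM Operator do on Kubernetes?", DOCS))
```

Swapping the toy scorer for an embedding model and a vector store, and the document list for a real corpus, turns this skeleton into the pipeline the course scales up.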

 

 

Prerequisite(s):

    • Intermediate understanding of Python, and comfort working with LangChain and LLMs.
    • Prior coursework in RAG fundamentals, prompt engineering, or LangChain is recommended.

 

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.

 

 

Deploying RAG Pipelines for Production at Scale

Instructor-Led Workshop: Deploying RAG Pipelines for Production at Scale

Thursday, March 20, 09:00-17:00 CET (UTC+01:00)


Retrieval-Augmented Generation (RAG) pipelines are revolutionizing enterprise operations. However, most existing tutorials stop at proof-of-concept implementations that falter when scaling. This workshop aims to bridge that gap, focusing on building scalable, production-ready RAG pipelines powered by NVIDIA NIM microservices and Kubernetes. You will gain hands-on experience deploying, monitoring, and scaling RAG pipelines with the NIM Operator and learn best practices for infrastructure optimization, performance monitoring, and handling high traffic volumes.

We’ll begin by building a simple RAG pipeline using the NVIDIA API catalog, then deploy and test individual components in a local environment using Docker Compose. Once familiar with the basics, we’ll shift to deploying NIMs, such as LLM, NeMo Retriever Text Embedding, and NeMo Retriever Text Reranking, in a Kubernetes cluster using the NIM Operator. This will include managing the deployment, monitoring, and scalability of NVIDIA NIM microservices. Building on these deployments, we’ll cover constructing a production-grade RAG pipeline using the deployed NIMs and explore NVIDIA’s blueprint for PDF ingestion, learning how to integrate it into the RAG pipeline.

To ensure operational efficiency, we’ll introduce Prometheus and Grafana for monitoring pipeline performance, cluster health, and resource utilization. Scalability will be addressed through the use of the Kubernetes Horizontal Pod Autoscaler (HPA) for dynamically scaling NIMs based on custom metrics in conjunction with the NIM Operator. Custom dashboards will be created to visualize key metrics and interpret performance insights.
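The HPA's scaling decision itself is a documented formula: desiredReplicas = ceil(currentReplicas x currentMetricValue / desiredMetricValue), clamped to the configured bounds. A small sketch of that rule in Python (the in-flight-requests metric is an invented example of a custom metric):

```python
# Sketch of the Kubernetes Horizontal Pod Autoscaler scaling rule:
# desired = ceil(current * metric / target), clamped to [min, max].

import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 2 NIM replicas each seeing 90 in-flight requests against a target of 50:
print(hpa_desired_replicas(2, current_metric=90, target_metric=50))
```

The same arithmetic drives scale-down when the metric falls below target, which is why choosing the target value and the metric source (here, Prometheus custom metrics) is the central tuning decision.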

You’ll be able to:

  • Build a simple RAG pipeline using API endpoints, deployed locally with Docker Compose.
  • Deploy a variety of NVIDIA NIM microservices in a Kubernetes cluster using the NIM Operator.
  • Combine NIMs into a cohesive, production-grade RAG pipeline and integrate advanced data ingestion workflows.
  • Monitor RAG pipelines and Kubernetes clusters with Prometheus and Grafana.
  • Scale NIMs to handle high traffic using the NIM Operator.
  • Create, deploy, and scale RAG pipelines for a variety of agentic workflows, including PDF ingestion.

 

Prerequisite(s):

  • Familiarity working with LLM-based applications.
  • Familiarity with RAG pipelines.
  • Familiarity working with Kubernetes.
  • Familiarity working with Helm.

 

Certificate: Upon successful completion of the assessment, you’ll receive an NVIDIA certificate to recognize your subject matter competency and support your professional career growth.