
Generative AI with Diffusion Models (NV-GEN-AI-DM)

  • Course Code GK847006
  • Duration: 1 day


Delivery Method

This course is available in the following formats:

  • Closed

Request this course in a different delivery format.

Take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines.

Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before. Generative AI plays a significant role across industries due to its numerous applications, such as creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more. In this course, learners will take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines.

Company Events

These events can be delivered exclusively for your company, at our locations or yours, and shaped around your delegates and their needs. Company events can be standard or tailored course deliveries.


Course Objectives

  • Build a U-Net to generate images from pure noise
  • Improve the quality of generated images with the denoising diffusion process
  • Control the image output with context embeddings
  • Generate images from English text prompts using the Contrastive Language-Image Pretraining (CLIP) neural network

Module 1: From U-Net to Diffusion

  • Build a U-Net architecture.
  • Train a model to remove noise from an image.
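The two steps above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the course's actual notebook code: the layer widths, image size, and noise level are arbitrary choices made here for brevity.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net sketch: one down block, a bottleneck, one up block, one skip."""
    def __init__(self, channels=1, base=16):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, stride=2, padding=1), nn.ReLU(),  # downsample 2x
        )
        self.bottleneck = nn.Sequential(
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),  # upsample 2x
            nn.Conv2d(base, channels, 3, padding=1),
        )

    def forward(self, x):
        d = self.down(x)
        b = self.bottleneck(d)
        # Skip connection: concatenate encoder features with bottleneck output.
        return self.up(torch.cat([d, b], dim=1))

# One denoising training step: predict the clean image from a noisy copy.
model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(4, 1, 16, 16)
noisy = clean + 0.3 * torch.randn_like(clean)
pred = model(noisy)
loss = nn.functional.mse_loss(pred, clean)
loss.backward()
opt.step()
```

The skip connection is the defining U-Net trait: it carries fine spatial detail from the encoder directly to the decoder, which matters when the output must be a full-resolution image.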

Module 2: Diffusion Models

  • Define the forward diffusion function.
  • Update the U-Net architecture to accommodate a timestep.
  • Define a reverse diffusion function.
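As a sketch of the forward and reverse functions this module defines, here is the standard DDPM closed form in PyTorch. The schedule values (200 timesteps, betas from 1e-4 to 0.02) are common defaults, not necessarily the course's; the reverse step shown computes only the predicted mean, omitting the added-noise term for brevity.

```python
import torch

T = 200
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)     # cumulative product over timesteps

def forward_diffusion(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in one shot, instead of t sequential steps."""
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

def reverse_step(xt, t, eps_pred):
    """Mean of one DDPM reverse step, given the model's noise prediction."""
    a = alphas[t].view(-1, 1, 1, 1)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    return (xt - (1 - a) / (1 - ab).sqrt() * eps_pred) / a.sqrt()

x0 = torch.rand(4, 1, 16, 16)
noise = torch.randn_like(x0)
t = torch.randint(0, T, (4,))                # a random timestep per sample
xt = forward_diffusion(x0, t, noise)
x_prev = reverse_step(xt, t, noise)          # with a perfect noise prediction
```

Because each image in the batch gets its own random timestep, the U-Net must also take `t` as an input, which is why this module updates the architecture to accommodate it.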

Module 3: Optimizations

  • Implement Group Normalization.
  • Implement GELU.
  • Implement Rearrange Pooling.
  • Implement Sinusoidal Position Embeddings.
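The four optimizations above slot into the U-Net's convolutional blocks. A rough PyTorch sketch follows; the channel counts and group size are illustrative, and `PixelUnshuffle` stands in for the space-to-depth rearrange pooling (the course may use einops for the same operation).

```python
import math
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Conv block sketch combining the Module 3 optimizations."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.GroupNorm(8, c_out),           # Group Normalization
            nn.GELU(),                        # smooth activation
            nn.PixelUnshuffle(2),             # rearrange pooling: space to depth, c_out -> 4*c_out
            nn.Conv2d(4 * c_out, c_out, 1),   # project channels back down
        )

    def forward(self, x):
        return self.block(x)

def sinusoidal_embedding(t, dim):
    """Map integer timesteps to transformer-style sin/cos features."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([args.sin(), args.cos()], dim=-1)

y = DownBlock(1, 16)(torch.rand(2, 1, 16, 16))   # halves spatial size: (2, 16, 8, 8)
emb = sinusoidal_embedding(torch.arange(8), 32)  # (8, 32)
```

Rearrange pooling downsamples without discarding pixels the way max pooling does, and the sinusoidal embedding gives the network a smooth, unique encoding of each timestep rather than a raw integer.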

Module 4: Classifier-Free Diffusion Guidance

  • Add categorical embeddings to a U-Net.
  • Train a model with a Bernoulli mask.
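The Bernoulli mask is what makes classifier-free guidance work: conditioning is randomly dropped during training so one model learns both conditional and unconditional denoising. A sketch, with the drop rate, embedding size, and guidance weight chosen here for illustration:

```python
import torch
import torch.nn as nn

num_classes, emb_dim = 10, 32
label_emb = nn.Embedding(num_classes, emb_dim)   # categorical embeddings for the U-Net

labels = torch.randint(0, num_classes, (8,))
c = label_emb(labels)

# Bernoulli mask: keep the conditioning ~90% of the time, zero it out otherwise,
# so the same network also learns the unconditional case.
keep = torch.bernoulli(torch.full((8, 1), 0.9))
c = c * keep

def cfg(eps_uncond, eps_cond, w=4.0):
    """Classifier-free guidance at sampling time: push the conditional
    prediction away from the unconditional one by weight w."""
    return (1 + w) * eps_cond - w * eps_uncond

guided = cfg(torch.zeros(3), torch.ones(3))
```

Larger `w` trades diversity for fidelity to the class label; `w = 0` recovers the plain conditional prediction.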

Module 5: CLIP

  • Learn how to use CLIP Encodings.
  • Use CLIP to create a text-to-image neural network.
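CLIP's core operation is a cosine-similarity score between image and text embeddings in a shared space. The sketch below uses random stand-in embeddings to stay self-contained; a real pipeline would obtain them from a pretrained CLIP model (e.g. via the `transformers` library's `CLIPModel`), which requires a download this example skips.

```python
import torch
import torch.nn.functional as F

# Stand-in embeddings; in practice these come from CLIP's image and text encoders.
image_emb = F.normalize(torch.randn(4, 512), dim=-1)
text_emb = F.normalize(torch.randn(4, 512), dim=-1)

# Cosine-similarity logits between every image/text pair.
logits = image_emb @ text_emb.T              # (4 images, 4 texts)
best_text_per_image = logits.argmax(dim=1)   # most similar caption per image
```

In the text-to-image network this module builds, the CLIP text embedding takes the place of the categorical embedding from the previous module, so a free-form English prompt, rather than a class label, conditions the diffusion process.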