Implement a Data Analytics Solution with Azure Databricks (DP-3011)
- Reference: M-DP3011
- Duration: 1 day
- Language: English
- Delivery format: Virtual inter-company class
- Price: EUR 795.00 (excl. VAT)
Delivery formats
This course is available in the following formats:
- Virtual inter-company class
  Join the inter-company training class from any room with an internet connection.
- In-person inter-company class
  Delivered as an inter-company course. This learning method allows interactivity between the trainer and the participants in the classroom.
- In-company
  This course can be delivered to a private group and tailored to the company's needs. Contact us.
Summary
This course explores how to use Databricks and Apache Spark on Azure to take data projects from exploration to production. You’ll learn how to ingest, transform, and analyze large-scale datasets with Spark DataFrames, Spark SQL, and PySpark, while also building confidence in managing distributed data processing. Along the way, you’ll get hands-on with the Databricks workspace—navigating clusters and creating and optimizing Delta tables. You’ll also dive into data engineering practices, including designing ETL pipelines, handling schema evolution, and enforcing data quality. The course then moves into orchestration, showing you how to automate and manage workloads with Lakeflow Jobs and pipelines. To round things out, you’ll explore governance and security capabilities such as Unity Catalog and Purview integration, ensuring you can work with data in a secure, well-managed, and production-ready environment.
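As a loose illustration of that ingest-transform-analyze loop (a sketch under assumptions, not course material), the snippet below assumes a Databricks notebook where `spark` is predefined; the CSV path, column names, and table name are hypothetical.

```python
# Minimal sketch of the ingest -> transform -> analyze loop, assuming a
# Databricks notebook where `spark` is predefined. The CSV path, column
# names, and table name are hypothetical.
from pyspark.sql import functions as F

# Ingest: read a raw CSV file into a Spark DataFrame.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/sales.csv"))  # hypothetical path

# Transform: filter out bad rows and aggregate with the DataFrame API.
summary = (raw.where(F.col("amount") > 0)
              .groupBy("region")
              .agg(F.sum("amount").alias("total_amount")))

# Persist the result as a managed Delta table (Delta is the default table
# format on Databricks; the explicit format call just makes the intent visible).
summary.write.format("delta").mode("overwrite").saveAsTable("sales_summary")

# Analyze: the same table is immediately queryable from Spark SQL.
spark.sql("SELECT * FROM sales_summary ORDER BY total_amount DESC").show()
```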
Virtual Learning
This interactive training can be taken from any location, your office or home, and is delivered live by a trainer. There are no delegates in the room with the instructor; all delegates are connected virtually. Virtual delegates do not travel to this course: Global Knowledge will send you all the information you need before the start of the course, and you can test your logins in advance.
Upcoming dates
All sessions below are virtual inter-company classes delivered remotely (W. Europe):
- 02 February 2026, 9:30 AM to 5:30 PM (French)
- 02 March 2026, 9:00 AM to 5:00 PM (English)
- 13 April 2026, 10:30 AM to 6:00 PM (English)
- 01 June 2026, 9:30 AM to 5:30 PM (French)
- 30 June 2026, 9:00 AM to 5:00 PM (English)
- 10 August 2026, 10:30 AM to 6:00 PM (English)
Course objectives
Students will learn to:
- Explore Azure Databricks
- Perform data analysis with Azure Databricks
- Use Apache Spark in Azure Databricks
- Manage data with Delta Lake
- Build data pipelines with Delta Live Tables
- Deploy workloads with Azure Databricks Workflows
- Use SQL Warehouses in Azure Databricks (see the connection sketch after this list)
- Run Azure Databricks Notebooks with Azure Data Factory
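To give a feel for the SQL Warehouses objective, here is a minimal sketch of querying an Azure Databricks SQL Warehouse from Python using the `databricks-sql-connector` package (`pip install databricks-sql-connector`). The hostname, HTTP path, access token, and table name are hypothetical placeholders, not values from this course.

```python
# Minimal sketch: querying an Azure Databricks SQL Warehouse from Python.
# All connection values and the table name below are hypothetical placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token="dapiXXXXXXXXXXXXXXXX",                           # placeholder
) as connection:
    with connection.cursor() as cursor:
        # Any SQL the warehouse can run, e.g. over a Delta table.
        cursor.execute(
            "SELECT region, SUM(amount) AS total FROM sales_summary GROUP BY region"
        )
        for row in cursor.fetchall():
            print(row.region, row.total)
```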
Detailed programme
Module 1: Implement a Data Analytics Solution with Azure Databricks
- Explore Azure Databricks
- Perform data analysis with Azure Databricks
- Use Apache Spark in Azure Databricks
- Manage data with Delta Lake
- Build Lakeflow Declarative Pipelines (see the sketch after this list)
- Deploy workloads with Lakeflow Jobs
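As a rough illustration of the declarative-pipelines module (a sketch under assumptions, not course material), the snippet below uses the Python `dlt` module exposed by Lakeflow Declarative Pipelines (formerly Delta Live Tables). It only runs inside a Databricks pipeline, where `spark` and `dlt` are provided; the path and table names are hypothetical.

```python
# Minimal sketch of a declarative pipeline. This only runs inside a
# Databricks (Lakeflow / Delta Live Tables) pipeline, where `spark` and the
# `dlt` module are provided. The path and table names are hypothetical.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested from a hypothetical landing zone.")
def raw_orders():
    return spark.read.format("json").load("/mnt/landing/orders/")  # hypothetical path

@dlt.table(comment="Cleaned orders with a basic data-quality rule applied.")
@dlt.expect_or_drop("positive_amount", "amount > 0")  # drop rows violating the rule
def clean_orders():
    return dlt.read("raw_orders").withColumn("amount", col("amount").cast("double"))
```

The `@dlt.expect_or_drop` expectation is one way such pipelines enforce the data-quality rules mentioned in the summary above.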
Prerequisites
Before taking this course, learners should already be comfortable with the fundamentals of Python and SQL. This includes being able to write simple Python scripts and work with common data structures, as well as writing SQL queries to filter, join, and aggregate data. A basic understanding of common file formats such as CSV, JSON, or Parquet will also help when working with datasets. In addition, familiarity with the Azure portal and core services like Azure Storage is important, along with a general awareness of data concepts such as batch versus streaming processing and structured versus unstructured data. While not mandatory, prior exposure to big data frameworks like Spark, and experience working with Jupyter notebooks, can make the transition to Databricks smoother.
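As a rough self-check of that entry level (not course material), the snippet below exercises the kind of Python and SQL described above using only the standard library; the tables and values are invented for illustration.

```python
# Entry-level self-check: basic Python data structures plus SQL that
# filters, joins, and aggregates. Tables and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
    INSERT INTO orders VALUES (10, 1, 120.0), (11, 1, 80.0), (12, 2, 200.0);
""")

# Filter, join, and aggregate: the level of SQL the course assumes.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS total
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    WHERE o.amount > 50
    GROUP BY c.region
""").fetchall()

# Basic Python data structures: fold the result into a dict.
totals = {region: total for region, total in rows}
print(totals)  # e.g. {'APAC': 200.0, 'EMEA': 200.0}
conn.close()
```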