DP-3028: Implement Generative AI engineering with Azure Databricks

LOCATION | July | August | September | October
---|---|---|---|---
Auckland | | | |
Hamilton | | | |
Christchurch | | | |
Wellington | | | |
Virtual Class | | | |
The Implement Generative AI Engineering with Azure Databricks course empowers learners to build, fine-tune, and deploy advanced LLM applications. You'll gain hands-on experience with Retrieval Augmented Generation (RAG), multi-stage reasoning, and responsible AI practices. The DP-3028 course dives deep into evaluating and optimizing language models using Azure Databricks tools like MLflow and Unity Catalog. Learn to fine-tune models, implement LLMOps workflows, and integrate frameworks such as LangChain and DSPy. This course is ideal for AI engineers and developers seeking practical expertise in Generative AI.
This course is best suited to data scientists, AI engineers, and developers.
Before attending this course, students should be familiar with fundamental AI concepts and with Azure Databricks.
After completing this course, students will be able to:
- Describe the fundamentals of Generative AI and Large Language Models (LLMs).
- Implement Retrieval Augmented Generation (RAG) workflows using Azure Databricks.
- Build and deploy multi-stage reasoning systems with popular open-source libraries.
- Fine-tune Azure OpenAI models using customized datasets.
- Evaluate LLMs using key metrics and LLM-as-a-judge frameworks.
- Apply responsible AI principles and implement LLMOps with MLflow and Unity Catalog.
Course outline:
Module: Get started with language models
- Describe Generative AI.
- Describe Large Language Models (LLMs).
- Identify key components of LLM applications.
- Use LLMs for Natural Language Processing (NLP) tasks (a minimal sketch follows this list).
- Lab: Explore language models
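
To make the NLP-tasks bullet concrete, here is a minimal sketch of running a pre-trained language model for summarization. It is not course material: it assumes the Hugging Face `transformers` package (plus `torch`) is installed on your cluster, and the model name is an illustrative choice.

```python
# A minimal sketch (not from the course): run a pre-trained language model
# for an NLP task. Assumes `transformers` and `torch` are installed, e.g.
# via %pip install transformers torch on a Databricks notebook.
from transformers import pipeline

# Summarization is one of the NLP tasks this module covers.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "Azure Databricks combines a managed Apache Spark environment with "
    "tooling such as MLflow and Unity Catalog, which makes it a practical "
    "platform for building and operating LLM applications."
)

# max_length/min_length bound the generated summary length in tokens.
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```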
Module: Implement Retrieval Augmented Generation (RAG)
- Set up a RAG workflow.
- Prepare your data for RAG.
- Retrieve relevant documents with vector search.
- Improve model accuracy by reranking your search results (see the sketch after this list).
- Lab: Set up RAG
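
The retrieve-then-rerank pattern in this module can be illustrated with a small, library-agnostic sketch. It assumes the `sentence-transformers` package is installed; on Azure Databricks you would typically back the retrieval stage with a managed vector search index rather than the in-memory embeddings used here, and the model names are illustrative.

```python
# A library-agnostic sketch of retrieve-then-rerank, assuming
# `sentence-transformers` is installed. Model names are illustrative.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

docs = [
    "Unity Catalog provides governance for data and AI assets.",
    "MLflow tracks experiments, models, and evaluation metrics.",
    "Vector search retrieves documents by embedding similarity.",
]

# Stage 1: dense retrieval with a bi-encoder (fast, approximate).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "How do I find documents similar to a question?"
hits = util.semantic_search(
    embedder.encode(query, convert_to_tensor=True), doc_emb, top_k=3
)[0]

# Stage 2: rerank the candidates with a cross-encoder (slower, more accurate).
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, docs[h["corpus_id"]]) for h in hits])
for score, h in sorted(zip(scores, hits), key=lambda p: -p[0]):
    print(f"{score:.3f}  {docs[h['corpus_id']]}")
```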
Module: Implement multi-stage reasoning
- Identify the need for multi-stage reasoning systems.
- Describe a multi-stage reasoning workflow.
- Implement multi-stage reasoning with libraries such as LangChain, LlamaIndex, Haystack, and DSPy (a LangChain sketch follows this list).
- Lab: Implement multi-stage reasoning with LangChain
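
Here is one way a two-stage LangChain pipeline might look, where the output of the first stage feeds the second. It assumes the `langchain-openai` package, an Azure OpenAI chat deployment, and the `AZURE_OPENAI_ENDPOINT`/`AZURE_OPENAI_API_KEY` environment variables; the deployment name, API version, and prompts are placeholders, not course values.

```python
# A minimal multi-stage reasoning sketch with LangChain (LCEL). Assumes
# `langchain-openai` is installed and AZURE_OPENAI_ENDPOINT /
# AZURE_OPENAI_API_KEY are set. Deployment name and API version are
# placeholders.
from langchain_openai import AzureChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = AzureChatOpenAI(azure_deployment="gpt-4o", api_version="2024-06-01")

# Stage 1: extract the claims made in a passage.
extract = ChatPromptTemplate.from_template(
    "List the factual claims in this text, one per line:\n{text}"
)
# Stage 2: assess the claims produced by stage 1.
assess = ChatPromptTemplate.from_template(
    "For each claim below, answer 'supported' or 'unsupported':\n{claims}"
)

# Pipe the stages together: stage 1's output becomes stage 2's input.
chain = (
    extract | llm | StrOutputParser()
    | (lambda claims: {"claims": claims})
    | assess | llm | StrOutputParser()
)
print(chain.invoke({"text": "Azure Databricks includes MLflow and Spark."}))
```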
Module: Fine-tune Azure OpenAI models
- Understand when to use fine-tuning.
- Prepare your data for fine-tuning.
- Fine-tune an Azure OpenAI model (see the sketch after this list).
- Lab: Fine-tune an Azure OpenAI model
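
The sketch below shows the general shape of a fine-tuning submission with the OpenAI Python SDK (v1) against Azure OpenAI: chat-formatted JSONL data, a file upload, then a job. The endpoint, API version, and base model name are placeholders; check which fine-tunable models your region supports, and note that real jobs need a much larger training set.

```python
# A minimal sketch of submitting an Azure OpenAI fine-tuning job with the
# OpenAI Python SDK (v1). API version and model name are placeholders.
import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Training data must be chat-formatted JSONL, one example per line.
# (Real fine-tuning jobs require many examples; one is shown for brevity.)
example = {"messages": [
    {"role": "system", "content": "You answer Databricks questions."},
    {"role": "user", "content": "What does Unity Catalog do?"},
    {"role": "assistant", "content": "It governs data and AI assets."},
]}
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Upload the file, then start the fine-tuning job against a base model.
training_file = client.files.create(file=open("train.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-4o-mini")
print(job.id, job.status)
```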
Module: Evaluate language models
- Compare LLM and traditional ML evaluations.
- Describe the relationship between LLM evaluation and evaluation of entire AI systems.
- Describe generic LLM evaluation metrics like accuracy, perplexity, and toxicity.
- Describe LLM-as-a-judge for evaluation (see the sketch after this list).
- Lab: Evaluate an Azure OpenAI model
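
To illustrate the evaluation ideas above, here is a minimal sketch of scoring a static set of model outputs with MLflow. It assumes mlflow>=2.8 (plus the `evaluate`, `torch`, and `transformers` packages for metrics such as toxicity); the LLM-as-a-judge metric is shown commented out because it requires a judge model endpoint and credentials.

```python
# A minimal sketch of evaluating model outputs with MLflow. Assumes
# mlflow>=2.8; some built-in metrics need `evaluate`/`torch`/`transformers`.
import mlflow
import pandas as pd

eval_df = pd.DataFrame({
    "inputs": ["What tracks ML experiments on Databricks?"],
    "ground_truth": ["MLflow"],
    "outputs": ["MLflow tracks experiments on Databricks."],
})

with mlflow.start_run():
    results = mlflow.evaluate(
        data=eval_df,
        predictions="outputs",
        targets="ground_truth",
        model_type="question-answering",  # adds exact_match, toxicity, ...
        # LLM-as-a-judge (needs a judge model endpoint and credentials):
        # extra_metrics=[mlflow.metrics.genai.answer_similarity(
        #     model="openai:/gpt-4o")],
    )
    print(results.metrics)
```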