Generative AI for Beginners
A comprehensive curriculum exploring the fundamentals of Generative AI, Large Language Models, prompt engineering, and the development of AI-powered applications using tools like Azure OpenAI and the Power Platform.
Lessons
Course Overview
📚 Content Summary
A comprehensive curriculum exploring the fundamentals of Generative AI, Large Language Models, prompt engineering, and the development of AI-powered applications using tools like Azure OpenAI and the Power Platform.
Master the fundamentals of Generative AI and build intelligent applications from scratch.
Acknowledgments: Microsoft, Azure, and OpenAI.
🎯 Learning Objectives
- Explain the mechanical inner workings of LLMs, including tokenization, the attention mechanism, and non-deterministic output.
- Compare various LLM categories (Foundation Models, Open-source vs. Proprietary, and Encoder/Decoder architectures) to select the right tool for a business scenario.
- Evaluate strategies for improving model results, specifically choosing between Prompt Engineering, Retrieval Augmented Generation (RAG), and Fine-tuning.
- Define Prompt Engineering and explain its role as the primary programming interface for generative AI.
- Differentiate between Base LLMs and Instruction-Tuned LLMs and how they process tokens.
- Construct complex prompts using instructions, primary content, cues, and templates.
- Construct and configure text generation applications using the
openailibrary, managing environment variables, and adjusting output variety via temperature. - Differentiate between rule-based chatbots and context-aware generative AI applications while implementing Microsoft’s Six Principles of Responsible AI.
- Execute semantic search by converting text into embeddings (vectors) and applying cosine similarity to find relevant content beyond simple keyword matching.
- Build and configure image generation applications while implementing "meta prompts" to define content boundaries and safety.
🔹 Lesson 1: Foundations and Ethics of Generative AI
Overview: This lesson provides a comprehensive introduction to Generative AI, tracing its evolution from statistical machine learning to modern Transformer-based Large Language Models (LLMs). Learners will explore how these models function through tokenization and probability, how to select and optimize different model types (Open-source vs. Proprietary), and the critical framework for applying Responsible AI principles to mitigate risks like hallucinations and bias.
Learning Outcomes:
- Explain the mechanical inner workings of LLMs, including tokenization, the attention mechanism, and non-deterministic output.
- Compare various LLM categories (Foundation Models, Open-source vs. Proprietary, and Encoder/Decoder architectures) to select the right tool for a business scenario.
- Evaluate strategies for improving model results, specifically choosing between Prompt Engineering, Retrieval Augmented Generation (RAG), and Fine-tuning.
🔹 Lesson 2: The Art and Science of Prompt Engineering
Overview: This lesson explores Prompt Engineering (PE) as the process of designing and optimizing text inputs (prompts) to guide Large Language Models (LLMs) toward producing high-quality, consistent responses. Students will move from understanding the foundational mechanics of tokenization and model types to applying advanced techniques like Chain-of-thought and Maieutic prompting to mitigate model limitations such as stochasticity and fabrication.
Learning Outcomes:
- Define Prompt Engineering and explain its role as the primary programming interface for generative AI.
- Differentiate between Base LLMs and Instruction-Tuned LLMs and how they process tokens.
- Construct complex prompts using instructions, primary content, cues, and templates.
🔹 Lesson 3: Developing Core AI Applications
Overview: This lesson explores the practical development of AI-driven tools, focusing on text generation, context-aware chat interfaces, and semantic search applications. Learners transition from basic API integration and parameter tuning (temperature and tokens) to implementing advanced features like text embeddings, cosine similarity, and responsible AI frameworks.
Learning Outcomes:
- Construct and configure text generation applications using the
openailibrary, managing environment variables, and adjusting output variety via temperature. - Differentiate between rule-based chatbots and context-aware generative AI applications while implementing Microsoft’s Six Principles of Responsible AI.
- Execute semantic search by converting text into embeddings (vectors) and applying cosine similarity to find relevant content beyond simple keyword matching.
🔹 Lesson 4: Low-Code and Integrated AI Solutions
Overview: This material covers three advanced pillars of AI implementation: building image generation applications using models like DALL-E and Midjourney, developing low-code solutions via the Microsoft Power Platform, and enhancing LLM capabilities through "function calling." The content focuses on practical deployment, from Python-based API integrations to natural language-driven app development and connecting AI to external data sources.
Learning Outcomes:
- Build and configure image generation applications while implementing "meta prompts" to define content boundaries and safety.
- Design low-code AI apps and automated workflows using Copilot, Dataverse, and AI Builder within the Power Platform.
- Implement function calling to ensure consistent structured data output (JSON) and integrate LLMs with external APIs.
🔹 Lesson 5: UX, Security, and Application Lifecycle
Overview: This lesson covers the critical intersection of user experience (UX), security protocols, and the operational lifecycle specifically for Generative AI applications. It explores how to build trust through explainability, identifies unique AI security threats like data poisoning and prompt injection, and outlines the transition from traditional MLOps to LLMOps for managing the application lifecycle.
Learning Outcomes:
- Design AI interfaces that promote trust and transparency through explainability and user control.
- Identify and mitigate AI-specific security risks including data poisoning, prompt injection, and supply chain vulnerabilities.
- Differentiate between MLOps and LLMOps and explain the stages of the Generative AI application lifecycle (Ideating, Building, Operationalizing).
🔹 Lesson 6: Advanced Retrieval and Agentic Systems
Overview: This lesson covers the transition from basic LLM interactions to sophisticated systems that utilize Retrieval Augmented Generation (RAG) and autonomous AI Agents. Learners will explore how to ground models with private data using vector databases and how to extend LLM capabilities through agentic frameworks that can plan, use tools, and interact with other agents.
Learning Outcomes:
- Explain the technical workflow of RAG, including chunking, embedding, and semantic retrieval.
- Compare and select appropriate open-source models (Llama 2, Mistral, Falcon) based on cost, customization, and performance.
- Differentiate between major AI Agent frameworks (LangChain, AutoGen, TaskWeaver, JARVIS) and their specific use cases.
🔹 Lesson 7: Model Fine-Tuning and Specialized Architectures
Overview: This lesson covers the transition from general prompt engineering to model optimization through supervised fine-tuning and the use of specialized Small Language Models (SLMs). It provides a comparative analysis of the Microsoft Phi, Mistral, and Meta Llama model families, detailing their architectural trade-offs in size, compute requirements, and multimodality for deployment across cloud and edge environments.
Learning Outcomes:
- Define fine-tuning and determine when to use it versus prompt engineering or Retrieval-Augmented Generation (RAG).
- Contrast the characteristics of Large Language Models (LLMs) and Small Language Models (SLMs) regarding size, comprehension, and inference speed.
- Identify the unique features and use cases for the Phi-3.5, Mistral (Large/Small/NeMo), and Llama (3.1/3.2) model families.