
Generative AI Architectures with LLM Prompt RAG Vector DB

Author: warezcrackfull on 24-11-2024, 06:06

Free Download Generative AI Architectures with LLM Prompt RAG Vector DB
Published 11/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 2.04 GB | Duration: 5h 27m
Design and Integrate AI-Powered S/LLMs into Enterprise Apps using Prompt Engineering, RAG, Fine-Tuning and Vector DBs


What you'll learn
Generative AI Model Architectures (Types of Generative AI Models)
Transformer Architecture: Attention Is All You Need
Large Language Models (LLMs) Architectures
Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation
Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)
Function Calling and Structured Outputs in Large Language Models (LLMs)
LLM Providers: OpenAI, Meta AI, Anthropic, Hugging Face, Microsoft, Google and Mistral AI
LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
SLM Models: OpenAI ChatGPT 4o mini, Meta Llama 3.2 mini, Google Gemma, Microsoft Phi 3.5
How to Choose LLM Models: Quality, Speed, Price, Latency and Context Window
Interacting Different LLMs with Chat UI: ChatGPT, LLama, Mixtral, Phi3
Installing and Running Llama and Gemma Models Using Ollama
Modernizing Enterprise Apps with AI-Powered LLM Capabilities
Designing the 'EShop Support App' with AI-Powered LLM Capabilities
Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, COT
Design Advanced Prompts for Ticket Detail Page in EShop Support App w/ Q&A Chat and RAG
The RAG Architecture: Ingestion with Embeddings and Vector Search
E2E Workflow of a Retrieval-Augmented Generation (RAG) - The RAG Workflow
End-to-End RAG Example for EShop Customer Support using OpenAI Playground
Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer
End-to-End Fine-Tuning an LLM for EShop Customer Support using OpenAI Playground
Choosing the Right Optimization – Prompt Engineering, RAG, and Fine-Tuning
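As a rough illustration of the zero-, one- and few-shot techniques listed above, the shot count simply maps to how many worked examples are placed in a chat-style message list before the real query. The ticket texts and labels below are invented for illustration, not course material:

```python
# Sketch: zero-, one-, and few-shot prompts as chat-style message lists.
# The task, tickets, and labels are invented illustration data.

def build_prompt(task, examples, query):
    """Build a chat-style message list; more examples = more 'shots'."""
    messages = [{"role": "system", "content": task}]
    for text, label in examples:  # zero examples -> zero-shot prompt
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

task = "Classify the support ticket sentiment as Positive or Negative."
shots = [
    ("My order arrived early, great service!", "Positive"),
    ("The product broke after one day.", "Negative"),
]

zero_shot = build_prompt(task, [], "Delivery was fast, thanks!")
few_shot = build_prompt(task, shots, "Delivery was fast, thanks!")

print(len(zero_shot))  # 2 messages: system + query
print(len(few_shot))   # 6 messages: system + 2 examples x 2 turns + query
```

The same list could then be passed as the `messages` argument of a chat-completion endpoint; chain-of-thought prompting would additionally include step-by-step reasoning in the example answers.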
Requirements
Basics of Software Architectures
Description
In this course, you'll learn how to design Generative AI architectures that integrate AI-powered S/LLMs into the EShop Support enterprise application using Prompt Engineering, RAG, Fine-Tuning and Vector DBs.

We will design Generative AI architectures with the following components:
Small and Large Language Models (S/LLMs)
Prompt Engineering
Retrieval-Augmented Generation (RAG)
Fine-Tuning
Vector Databases

We start with the basics and progressively dive deeper into each topic. We'll also follow the LLM Augmentation Flow, a framework that improves LLM results by applying Prompt Engineering, RAG and Fine-Tuning in sequence.

Large Language Models (LLMs) module:
How Large Language Models (LLMs) Work
Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation
Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)
Function Calling and Structured Output in Large Language Models (LLMs)
LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
SLM Models: OpenAI ChatGPT 4o mini, Meta Llama 3.2 mini, Google Gemma, Microsoft Phi 3.5
Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3
Interacting with the OpenAI Chat Completions Endpoint with Coding
Installing and Running Llama and Gemma Models Using Ollama to Run LLMs Locally
Modernizing and Designing the EShop Support Enterprise App with AI-Powered LLM Capabilities

Prompt Engineering module:
Steps of Designing Effective Prompts: Iterate, Evaluate and Templatize
Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought, Instruction-based and Role-based
Design Advanced Prompts for EShop Support: Classification, Sentiment Analysis, Summarization, Q&A Chat, and Response Text Generation
Design Advanced Prompts for the Ticket Detail Page in the EShop Support App w/ Q&A Chat and RAG

Retrieval-Augmented Generation (RAG) module:
The RAG Architecture Part 1: Ingestion with Embeddings and Vector Search
The RAG Architecture Part 2: Retrieval with Reranking and Context Query Prompts
The RAG Architecture Part 3: Generation with Generator and Output
E2E Workflow of Retrieval-Augmented Generation (RAG): The RAG Workflow
Design EShop Customer Support Using RAG
End-to-End RAG Example for EShop Customer Support Using the OpenAI Playground

Fine-Tuning module:
Fine-Tuning Workflow
Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer
Design EShop Customer Support Using Fine-Tuning
End-to-End Fine-Tuning an LLM for EShop Customer Support Using the OpenAI Playground

Lastly, we will discuss Choosing the Right Optimization: Prompt Engineering, RAG, and Fine-Tuning.

This course is more than just learning Generative AI; it's a deep dive into designing advanced AI solutions by integrating LLM architectures into enterprise applications. You'll get hands-on experience designing a complete EShop Customer Support application, including LLM capabilities such as Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, and Code Generation.
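The three-part RAG architecture described above (ingestion with embeddings, retrieval via vector search, generation from retrieved context) can be sketched end to end with toy bag-of-words embeddings. A real system would use a model-based embedding API and a vector database; both are only stand-ins here, and the support documents are invented examples:

```python
import math
import re
from collections import Counter

# --- Part 1: Ingestion -- embed documents into vectors.
# Toy bag-of-words embedding; a real pipeline would call an embedding model.
def embed(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Refunds are processed within 5 business days.",
    "Shipping takes 3 to 7 days depending on region.",
    "Passwords can be reset from the account settings page.",
]
index = [(doc, embed(doc)) for doc in docs]  # stands in for a vector DB

# --- Part 2: Retrieval -- rank documents by vector similarity to the query.
query = "How many days until my refund is processed?"
qvec = embed(query)
ranked = sorted(index, key=lambda d: cosine(qvec, d[1]), reverse=True)
top_doc = ranked[0][0]

# --- Part 3: Generation -- augment the prompt with retrieved context
# before handing it to an LLM (the LLM call itself is omitted here).
prompt = f"Context: {top_doc}\n\nQuestion: {query}\nAnswer using only the context."
print(top_doc)
```

The reranking step covered in Part 2 of the course would sit between the similarity search and prompt assembly, rescoring the top candidates with a stronger model.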
Overview
Section 1: Introduction
Lecture 1 Introduction
Lecture 2 Tools and Resources for the Course - Course Slides
Lecture 3 Course Project: EShop Customer Support with AI-Powered Capabilities using LLMs
Section 2: What is Generative AI ?
Lecture 4 Evolution of AI: AI, Machine Learning, Deep Learning and Generative AI
Lecture 5 What is Generative AI ?
Lecture 6 How Generative AI works ?
Lecture 7 Generative AI Model Architectures (Types of Generative AI Models)
Lecture 8 Transformer Architecture: Attention Is All You Need
Section 3: What are Large Language Models (LLMs) ?
Lecture 9 What are Large Language Models (LLMs) ?
Lecture 10 How Large Language Models (LLMs) Work
Lecture 11 What is Token And Tokenization ?
Lecture 12 How LLMs Use Tokens
Lecture 13 Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification
Lecture 14 LLM Use Cases and Real-World Applications
Lecture 15 Limitations of Large Language Models (LLMs)
Lecture 16 Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs
Lecture 17 LLM Settings: Temperature, Max Tokens, Stop sequences, Top P, Frequency Penalty
Lecture 18 Function Calling in Large Language Models (LLMs)
Lecture 19 Structured Output in Large Language Models (LLMs)
Lecture 20 What are Small Language Models (SLMs) ? Use Cases / How / Why / When
Section 4: Exploring and Running Different LLMs w/ HuggingFace and Ollama
Lecture 21 LLM Providers: OpenAI, Meta AI, Anthropic, Hugging Face, Microsoft, Google
Lecture 22 LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral
Lecture 23 SLM Models: OpenAI ChatGPT 4o mini, Meta Llama 3.2 mini, Gemma, Phi-3
Lecture 24 How to Choose LLM Models: Quality, Speed, Price, Latency and Context Window
Lecture 25 Open Source vs Proprietary Models
Lecture 26 Hugging Face - The GitHub of Machine Learning Models
Lecture 27 LLM Interaction Types: No-Code (ChatUI) or With-Code (API Keys)
Lecture 28 Interacting Different LLMs with Chat UI: ChatGPT, LLama, Mixtral, Phi3
Lecture 29 Interacting OpenAI Chat Completions Endpoint with Coding
Lecture 30 Ollama – Run LLMs Locally
Lecture 31 Installing and Running Llama and Gemma Models Using Ollama
Lecture 32 Ollama integration using Semantic Kernel and C# with coding
Lecture 33 Modernizing Enterprise Apps with AI-Powered LLM Capabilities
Lecture 34 Designing the 'EShop Support App' with AI-Powered LLM Capabilities
Lecture 35 LLMs Augmentation Flow: Prompt Engineering -> RAG -> Fine-Tuning -> Trained
Section 5: Prompt Engineering
Lecture 36 What is Prompt ?
Lecture 37 Elements and Roles of a Prompt
Lecture 38 What is Prompt Engineering ?
Lecture 39 Steps of Designing Effective Prompts: Iterate, Evaluate and Templatize
Lecture 40 Advanced Prompting Techniques
Lecture 41 Zero-Shot Prompting
Lecture 42 One-shot Prompting
Lecture 43 Few-shot Prompting
Lecture 44 Chain-of-Thought Prompting
Lecture 45 Instruction-based and Role-based Prompting
Lecture 46 Design Advanced Prompts for EShop Support – Classification, Sentiment Analysis
Lecture 47 Design Advanced Prompts for Ticket Detail Page in EShop Support App w/ Q&A Chat
Lecture 48 Test Prompts for Eshop Support Customer Ticket w/ Playground
Section 6: Retrieval-Augmented Generation (RAG)
Lecture 49 What is Retrieval-Augmented Generation (RAG) ?
Lecture 50 Why Need Retrieval-Augmented Generation (RAG) ? Why is RAG Important?
Lecture 51 How Does Retrieval-Augmented Generation (RAG) Work?
Lecture 52 The RAG Architecture Part 1: Ingestion with Embeddings and Vector Search
Lecture 53 The RAG Architecture Part 2: Retrieval with Reranking and Context Query Prompts
Lecture 54 The RAG Architecture Part 3: Generation with Generator and Output
Lecture 55 E2E Workflow of a Retrieval-Augmented Generation (RAG) - The RAG Workflow
Lecture 56 Applications Use Cases of RAG
Lecture 57 Challenges and Key Considerations of Using RAG -- Retrieval-Augmented Generation
Lecture 58 Design EShop Customer Support using RAG
Lecture 59 End-to-End RAG Example for EShop Customer Support using OpenAI Playground
Section 7: Fine-Tuning LLMs
Lecture 60 What is Fine-Tuning ?
Lecture 61 Why Need Fine-Tuning ?
Lecture 62 When to Use Fine-Tuning ?
Lecture 63 How Does Fine-Tuning Work?
Lecture 64 Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA
Lecture 65 Applications & Use Cases of Fine-Tuning
Lecture 66 Challenges and Key Considerations of Fine-Tuning
Lecture 67 Design EShop Customer Support Using Fine-Tuning
Lecture 68 End-to-End Fine-Tuning an LLM for EShop Customer Support using OpenAI Playground
Section 8: Choosing the Right Optimization – Prompt Engineering, RAG, and Fine-Tuning
Lecture 69 Comparison of Prompt Engineering, RAG, and Fine-Tuning
Lecture 70 Choosing the Right Optimization – Prompt Engineering, RAG, and Fine-Tuning
Lecture 71 Training Own Model for LLM Optimization
Lecture 72 Thanks
Who this course is for: Beginners who want to integrate AI-powered LLMs into enterprise apps

Homepage
https://www.udemy.com/course/generative-ai-architectures-with-llm-prompt-rag-vector-db/



