Andrew Ng, the founder of DeepLearning.AI and co-founder of Coursera, is a prominent figure in the fields of machine learning and deep learning. His courses on AI are highly regarded because they are well-structured and provide insights into the latest developments in the field.
Ng’s courses often include practical assignments and projects that allow one to gain real-world experience in implementing deep learning algorithms and models. These courses are regularly updated to reflect the most recent developments in deep learning.
Here are the latest Andrew Ng courses that will help you gain knowledge and develop skills in AI.
AI Agents in LangGraph
In this short course, you will learn how to integrate agentic search to enhance an agent’s knowledge with query-focused answers in predictable formats. You will also learn about implementing agentic memory to save state for reasoning and debugging and see how human-in-the-loop input can guide agents at key junctures.
You will build an agent from scratch and then reconstruct it with LangGraph to thoroughly understand the framework. Finally, you will develop a sophisticated essay-writing agent that incorporates all the lessons from the course.
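To give a flavour of what working with LangGraph looks like, here is a minimal sketch of a two-node graph with shared state. The node names and stubbed logic are illustrative placeholders, not the course's agent.

```python
# Minimal LangGraph sketch: a typed shared state flowing through two nodes.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    answer: str

def plan(state: AgentState) -> AgentState:
    # A real agent would call an LLM here to decide on a search or a tool.
    return {"question": state["question"], "answer": ""}

def respond(state: AgentState) -> AgentState:
    # Stubbed final step; the course's agent would draft an answer here.
    return {"question": state["question"], "answer": "draft answer"}

graph = StateGraph(AgentState)
graph.add_node("plan", plan)
graph.add_node("respond", respond)
graph.set_entry_point("plan")
graph.add_edge("plan", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "What is agentic search?", "answer": ""}))
```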
Enrol and get more details on the course here.
AI Agentic Design Patterns with AutoGen
In this course, you will learn how to use AutoGen to implement agentic design patterns such as multi-agent collaboration, sequential and nested chat, reflection, tool use, and planning.
You will also learn to build and combine specialised agents—like researchers, planners, coders, writers, and critics—that interact to execute complex workflows, such as generating detailed financial reports, which would otherwise require extensive manual effort.
The course includes key agentic design principles with fun demonstrations. For instance, one can build a conversational chess game with two player agents that validate moves, update the board state, and engage in lively banter about the game.
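As a rough, hedged illustration of the two-agent pattern (not the course's own code), a minimal AutoGen setup might look like the following; the model name and API key are placeholders.

```python
# Two-agent AutoGen sketch: a writer drafts, a critic drives the conversation.
from autogen import AssistantAgent, UserProxyAgent

# Placeholder LLM configuration; substitute a real model and API key.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

writer = AssistantAgent(
    name="writer",
    system_message="You draft short financial summaries.",
    llm_config=llm_config,
)
critic = UserProxyAgent(
    name="critic",
    human_input_mode="NEVER",     # fully automated for this sketch
    code_execution_config=False,  # no code execution needed here
)

# AutoGen manages the turn-taking between the two agents.
critic.initiate_chat(
    writer,
    message="Draft a two-line summary of Q1 revenue trends.",
    max_turns=2,
)
```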
Get to know more about the course and enrol here.
Introduction to On-device AI
In this course, you will deploy a real-time image segmentation model on device, learning essential steps for on-device deployment: neural network graph capture, on-device compilation, hardware acceleration, and validation of numerical correctness.
Additionally, you will learn how quantisation can make the model 4x faster and 4x smaller, improving performance on resource-constrained edge devices. These techniques are used to deploy models on various devices, including smartphones, drones, and robots, enabling many new and creative applications.
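The course targets real edge hardware, but a plain-PyTorch sketch of two of those steps, graph capture and quantisation, can illustrate the idea; the model and shapes below are made up.

```python
# Illustrative only: trace a model's graph and apply int8 dynamic quantisation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
example_input = torch.randn(1, 128)

# Graph capture: freeze the forward pass into a static graph that
# on-device compilers can consume.
traced = torch.jit.trace(model, example_input)

# Dynamic int8 quantisation of the Linear layers: weights drop from
# 32-bit floats to 8-bit integers, roughly 4x smaller.
quantised = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(traced(example_input).shape, quantised(example_input).shape)
```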
Get more details on the course here.
Multi AI Agent Systems with crewAI
In this course, one will learn to break down complex tasks into subtasks for multiple AI agents, each with a specialised role.
For example, creating a research report might involve researchers, writers, and quality assurance agents working together. One can define their roles, expectations, and interactions, similar to managing a team.
Additionally, you will explore key techniques such as role-playing, tool use, memory, guardrails, and cross-agent collaboration, and build multi-agent systems that tackle complex tasks, designing and watching the agents work together.
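A hedged sketch of that role-based setup with the crewAI library is shown below; the roles, goals, and task text are illustrative, and an OpenAI API key is assumed to be configured in the environment.

```python
# Two specialised agents collaborating on sequential tasks with crewAI.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Collect key facts on the given topic",
    backstory="An analyst who gathers and verifies information.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short report",
    backstory="A technical writer focused on clarity.",
)

research_task = Task(
    description="Research recent trends in on-device AI.",
    expected_output="A bullet list of findings",
    agent=researcher,
)
writing_task = Task(
    description="Write a one-page report from the research findings.",
    expected_output="A short, well-structured report",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())  # tasks run in order, passing context between agents
```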
Enrol and get more details on the course here.
Building Multimodal Search and RAG
In this course, one will learn how contrastive learning works and how to add multimodality to RAG, allowing models to use diverse, relevant contexts to answer questions.
For instance, a query about a financial report might integrate text snippets, graphs, tables, and slides. One will also learn how visual instruction tuning integrates image understanding into language models and how to build a multi-vector recommender system using Weaviate’s open-source vector database.
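To make the contrastive-learning idea concrete, here is a small self-contained PyTorch sketch of a CLIP-style objective; random tensors stand in for real image and text embeddings.

```python
# CLIP-style contrastive loss: matching image-text pairs are pulled together,
# mismatched pairs pushed apart.
import torch
import torch.nn.functional as F

image_emb = F.normalize(torch.randn(8, 512), dim=-1)  # stand-in image embeddings
text_emb = F.normalize(torch.randn(8, 512), dim=-1)   # paired text embeddings

logits = image_emb @ text_emb.T / 0.07   # cosine similarities with a temperature
targets = torch.arange(8)                # the i-th image matches the i-th text
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
print(loss.item())
```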
Get more details on the course here.
Building Agentic RAG with LlamaIndex
This course covers an important shift in RAG: instead of having the developer write explicit routines to retrieve information for the LLM context, one can build a RAG agent with access to various tools for retrieving information.
One will learn in detail about routing, where the agent uses decision-making to direct requests to multiple tools; tool use, where one can create an interface for agents to select the appropriate tool (function call) and generate the right arguments; and multi-step reasoning with tool use.
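As a hedged illustration of the tool-use idea (not the course's exact notebooks), a LlamaIndex agent can be handed plain Python functions as tools and left to choose among them and chain calls.

```python
# LlamaIndex tool use: wrap functions as tools and let a ReAct agent pick them.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI  # assumes an OpenAI API key is set

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

tools = [FunctionTool.from_defaults(fn=add), FunctionTool.from_defaults(fn=multiply)]

# The agent selects a tool, generates its arguments, and can chain several
# calls for multi-step reasoning.
agent = ReActAgent.from_tools(tools, llm=OpenAI(model="gpt-4o-mini"), verbose=True)
print(agent.chat("What is (3 + 4) * 5?"))
```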
Get more details on the course here.
Quantisation In Depth
In this course, you will learn to implement various linear quantisation techniques from scratch, including asymmetric and symmetric modes. Additionally, you will quantise at different granularities (per-tensor, per-channel, per-group) to maintain performance.
You will be able to construct a quantizer to compress the dense layers of any open-source deep learning model to 8-bit precision. Finally, you will practice quantising weights into 2 bits by packing four 2-bit weights into a single 8-bit integer.
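A from-scratch sketch of asymmetric per-tensor linear quantisation to 8 bits, in the spirit of what the course covers (the function names are illustrative), looks like this:

```python
# Asymmetric per-tensor linear quantisation to uint8, plus dequantisation.
import torch

def asymmetric_quantize(x: torch.Tensor, num_bits: int = 8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min().item() / scale.item()))
    zero_point = max(qmin, min(qmax, zero_point))  # keep inside the int range
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q: torch.Tensor, scale: torch.Tensor, zero_point: int) -> torch.Tensor:
    return (q.float() - zero_point) * scale

w = torch.randn(4, 4)
q, scale, zp = asymmetric_quantize(w)
print("max reconstruction error:", (w - dequantize(q, scale, zp)).abs().max().item())

# Packing four 2-bit values (each in [0, 3]) into a single 8-bit integer,
# as in the final exercise.
vals = [1, 0, 3, 2]
packed = vals[0] | (vals[1] << 2) | (vals[2] << 4) | (vals[3] << 6)
print(f"packed byte: {packed:08b}")
```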
Get more details on the course here.
Prompt Engineering for Vision Models
Here, one will learn how to prompt and fine-tune vision models for personalised image generation, editing, object detection, and segmentation. Depending on the model, prompts can be text, coordinates, or bounding boxes. Additionally, one will adjust hyperparameters to shape the output.
One will learn how to work with models like the Segment Anything Model (SAM), OWL-ViT, and Stable Diffusion, and how to fine-tune Stable Diffusion with a few images to generate personalised results, such as images of a specific person.
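For example, prompting the Segment Anything Model with a coordinate might look roughly like this; the checkpoint path and the blank image are placeholders.

```python
# Point-prompting SAM: a single (x, y) coordinate marks the object to segment.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local checkpoint
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a real RGB image
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # the prompt: one foreground point
    point_labels=np.array([1]),           # label 1 = foreground, 0 = background
)
print(masks.shape, scores)
```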
Learn more and enrol for the course here.
Getting Started with Mistral
In this course, you will explore Mistral’s open-source models (Mistral 7B, Mixtral 8x7B) and commercial models via API calls and Mistral AI’s Le Chat website.
You will implement JSON mode to generate structured outputs for direct integration into larger software systems, and use function calling for tool use, such as calling custom Python code that queries tabular data.
You will also ground the LLM’s responses in external knowledge sources using RAG and build a Mistral-powered chat interface that can reference external documents. This course will help deepen your prompt engineering skills.
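A hedged example of JSON mode with the mistralai Python client is shown below; the model name and prompt are illustrative, and the client interface differs between library versions.

```python
# Requesting structured JSON output from a Mistral model (mistralai v1-style client).
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "List three uses of RAG as a JSON object."}],
    response_format={"type": "json_object"},  # JSON mode: force structured output
)
print(response.choices[0].message.content)
```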
Get more details and enrol for the course here.
Preprocessing Unstructured Data for LLM Applications
To expand an LLM’s knowledge, it is essential to extract and normalise content from diverse formats such as PDF, PowerPoint, and HTML. This involves enriching the data with metadata to enable more powerful retrieval and reasoning.
In this course, one will learn to preprocess data for LLM applications, focusing on various document types, and discover how to extract and normalise documents into a common JSON format enriched with metadata for better search results.
The course covers techniques for document image analysis, including layout detection and vision transformers, to handle PDFs, images, and tables. Additionally, one will learn to build a RAG bot capable of ingesting diverse documents like PDFs, PowerPoints, and Markdown files.
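A small sketch of that extract-and-normalise step, assuming the open-source unstructured library and a placeholder filename, might look like this:

```python
# Partition a document into typed elements with metadata, then serialise to JSON.
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_to_json

elements = partition(filename="quarterly_report.pdf")  # also handles HTML, PPTX, ...

for el in elements[:3]:
    # Each element carries its type (Title, NarrativeText, Table, ...) and metadata.
    print(type(el).__name__, "| page:", el.metadata.page_number)

elements_to_json(elements, filename="quarterly_report.json")
```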
Enrol and get more details on the course here.