Generative AI has become the talk of the town, with venture capitalists investing heavily in genAI startups even without the guarantee of immediate returns. Experts expect the generative AI market to reach an impressive $109.3 billion by 2030, a promising outlook that is captivating investors across the board.
At Google I/O 2023, Google, one of the leading players in this space, introduced a suite of generative AI features for Gmail, unveiled the advanced PaLM 2 language model, showcased Med-PaLM 2 for medical applications, and highlighted the capabilities of Bard for developers. Google also announced gen AI enhancements for Google Cloud, including Duet AI, and introduced foundation models like Codey, Imagen, and Chirp.
Recently, Google also introduced free training courses on generative AI, each of which comes with a completion badge.
Read more: Generative AI is Having An Edison Moment
Introduction to Generative AI
This short, 45-minute course provides an introduction to generative AI, its applications, and how it differs from conventional machine learning approaches. It also covers Google tools that can assist in building your own generative AI applications.
Introduction to Large Language Models
This course gives an overview of large language models (LLMs): what they are and where they can be applied. It also delves into prompt engineering, which can improve the performance of LLMs, and introduces various Google tools that can assist in developing personalised genAI applications.
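To make "prompt engineering" concrete: a prompt is just structured text, so engineering it largely means choosing the task framing, worked examples, and layout that steer the model. A minimal sketch of few-shot prompt assembly (the function name and format are illustrative, not from the course):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    # End with the new input and an open "Output:" for the model to complete.
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[("Loved it!", "positive"), ("Terrible service.", "negative")],
    query="The food was amazing.",
)
print(prompt)
```

Swapping examples or rewording the task line is often enough to change the model's behaviour noticeably, which is what the course's prompt-engineering material explores.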
Attention Mechanism
This module focuses on attention mechanisms in deep learning and their applications in enhancing the effectiveness of different ML tasks. It explores how attention can be used to improve machine translation, text summarisation, and question answering, among other tasks. Prior knowledge of ML, deep learning, NLP, computer vision, and Python programming is necessary.
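The core computation the course builds on is scaled dot-product attention: each query scores every key, the scores are softmax-normalised, and the values are averaged with those weights. A minimal NumPy sketch (not course code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V and the attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key/value positions
V = rng.normal(size=(5, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # one output vector per query
```

In translation, for example, the queries come from the target sentence and the keys and values from the source, letting each output word attend to the most relevant source words.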
Transformer Models and BERT Model
This course introduces the key elements of the Transformer architecture, including the self-attention mechanism, and its application in constructing the BERT model. Additionally, you will understand various tasks that BERT can be employed for, including text classification, question answering, and natural language inference. Prior knowledge of intermediate ML, word embeddings, and attention mechanisms, along with proficiency in Python and TensorFlow, is recommended.
Introduction to Image Generation
Adding to the list of interesting courses, this one teaches you about diffusion models, which are responsible for generating images. Throughout the course, you will learn the principles behind diffusion models, as well as how to train and use them on Vertex AI. Prior knowledge of ML, deep learning, convolutional neural networks (CNNs), and Python programming is required to fully benefit from this course.
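The principle the course starts from is the forward diffusion process: an image is gradually corrupted with Gaussian noise according to a schedule, and the model is then trained to reverse that corruption. The noised sample at any step has a closed form, sketched here in NumPy with a toy "image" and a linear schedule (illustrative, not course code):

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))               # a toy 8x8 "image"
betas = np.linspace(1e-4, 0.02, 1000)      # linear noise schedule over 1000 steps
x_early = forward_diffusion(x0, 10, betas, rng)   # still close to the original
x_late = forward_diffusion(x0, 999, betas, rng)   # nearly pure noise
```

Generation runs this process in reverse: starting from pure noise, a trained network removes a little noise at each step until an image emerges.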
Create Image Captioning Models
This course will prepare you to build an image captioning model through deep learning. It covers the key elements of such a model, including the encoder and decoder, as well as the training and evaluation processes. Upon completion, you will be able to develop your own image captioning models and use them to generate captions for images. Prior familiarity with ML, deep learning, NLP, computer vision, and Python programming is advantageous.
Encoder-Decoder Architecture
In this course, you will learn about the encoder-decoder architecture, a widely used and effective ML framework for tasks involving sequences, such as translating languages, summarising text, and answering questions. The module covers the key elements of the architecture and provides instructions on training and deploying these models. To make the most of this module, it is essential to have a strong foundation in Python and familiarity with TensorFlow.
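The idea behind the architecture can be sketched in a few lines: an encoder compresses the input sequence into a context representation, and a decoder generates the output sequence one token at a time, conditioned on that context and its own previous token. A bare-bones NumPy toy with untrained random weights (so the output tokens are arbitrary; this only illustrates the data flow, not the course's TensorFlow implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 6                      # toy vocabulary size and hidden size
E = rng.normal(size=(vocab, d))       # token embeddings (untrained)
W_out = rng.normal(size=(d, vocab))   # decoder output projection (untrained)

def encode(src_tokens):
    """Encoder: compress the source sequence into one context vector."""
    return E[src_tokens].mean(axis=0)

def decode(context, max_len=5, bos=0, eos=1):
    """Decoder: autoregressively emit tokens, mixing the context
    with the previously generated token at each step."""
    out, prev, h = [], bos, context
    for _ in range(max_len):
        h = np.tanh(h + E[prev])          # combine state with last token
        prev = int((h @ W_out).argmax())  # greedy pick of the next token
        if prev == eos:
            break
        out.append(prev)
    return out

tokens = decode(encode([2, 5, 7]))
```

In a real translation model the encoder and decoder would be trained networks (RNNs or Transformers), and the greedy pick would often be replaced by beam search.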
Introduction to Responsible AI
Building products with a focus on ethical AI is important, which makes this course a need of the hour: it explains the significance of responsible AI and how Google incorporates responsible AI practices into its products.
Introduction to Generative AI Studio
Google introduced Generative AI Studio at Google I/O, and this course provides an introduction to the tool, which is part of Vertex AI, and how to use it effectively. Generative AI Studio allows you to create and personalise generative AI models so you can incorporate their abilities into your own applications.
Generative AI Explorer (Vertex AI)
The Generative AI Explorer – Vertex AI Quest is a series of labs on using generative AI on Google Cloud. It covers the Vertex AI PaLM API family, including models like text-bison, chat-bison, and textembedding-gecko. You’ll learn about prompt design, best practices, and applications like text classification and summarisation. The quest consists of four modules: foundation models, model tuning, PaLM API, and Text Embedding API.
Read more: AI Cloud Wars: Azure AI vs Vertex AI