Generative Pre-trained Transformers (GPT)

Offered by: Coursera

Overview

Large language models such as GPT-3.5, which powers ChatGPT, are changing how humans interact with computers and how computers can process text. This course introduces the fundamental ideas of natural language processing and language modelling that underpin these large language models. We will explore the basics of how language models work and the specifics of how newer neural approaches are built, then examine the key innovations that have enabled Transformer-based large language models to become dominant across a range of language tasks. Finally, we will consider the challenges of applying these models in practice, including the ethical issues involved in their construction and use.

Through hands-on Python labs, you will learn about the building blocks of Transformers and apply them to generate new text. The exercises step you through applying a smaller language model and understanding how it can be evaluated and applied to various problems. Regular practice quizzes reinforce the material and prepare you for the graded assessments.
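To give a flavour of what "language modelling" means before the labs, here is a toy bigram model in plain Python. It is a hypothetical illustration of the core idea (estimating the probability of the next word from context), not the course's actual lab code; the tiny corpus and function names are invented for this sketch.

```python
import random
from collections import Counter, defaultdict

# Tiny invented corpus; real models train on billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each preceding word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    """Estimate P(next | prev) from the bigram counts."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

def generate(start, length, seed=0):
    """Sample a short continuation, one word at a time."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        probs = next_word_probs(words[-1])
        if not probs:  # no observed continuation; stop early
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(next_word_probs("the"))   # each of cat/mat/dog/rug with probability 0.25
print(generate("the", 5))       # a short sampled sentence starting with "the"
```

GPT-style models follow the same next-token-prediction recipe, but replace the count table with a neural network conditioned on the entire preceding context.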

Syllabus

  • Language Modeling
    • This module introduces the concept of language modelling, which is the foundation of models like GPT.
  • Transformers and GPT
    • This module covers the technical background of neural language models and gives an overview of how they are used to generate text.
  • Applications and Implications
    • This module discusses considerations that are necessary when using GPT and similar models in real-world contexts, specifically discussing the risks of using these models and approaches to mitigating these risks.

Taught by

Mary Ellen Foster, Sean MacAvaney and Jake Lever



  • Coursera
  • Free
  • English
  • Certificate Available
  • Available at any time
  • Intermediate