Offered By: IBM Skills Network

Parameter-efficient fine-tuning (PEFT): Adapters in PyTorch

Apply parameter-efficient fine-tuning (PEFT) in PyTorch using adapters! This hands-on project walks you through fine-tuning a transformer-based neural network with a bottleneck adapter, which makes both training and storage more efficient. Upon completion, you will have strengthened your skills in incorporating adapters into pre-existing models and fine-tuning them, and you will have gained insight into the advantages and disadvantages of different fine-tuning methods.

Guided Project

Artificial Intelligence

67 Enrolled
4.7 (11 Reviews)

A look at the project ahead

Adapter-based parameter-efficient fine-tuning (PEFT) techniques are widely used for fine-tuning neural networks. Here’s why:
  1. Efficient training: Far fewer weights must be updated during training, which makes the process considerably more efficient than full fine-tuning.
  2. Efficient storage: A fine-tuned model can be stored compactly by saving only the adapter layers and the output layer, since every other weight in the original model remains unchanged.
  3. Reduced overfitting: Because the original weights are preserved, adapter-based PEFT techniques are less prone to overfitting; the adapted model retains most of the original model’s structure.
In this hands-on project, you’ll gain an understanding of how adapters function by applying one to a transformer-based neural network. The adapter you’ll use, called a bottleneck adapter, includes a non-linear activation function, which ensures that the resulting model isn’t just a linear combination of the original model’s weights.
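
To make this concrete, here is a minimal sketch of what such a bottleneck adapter might look like in PyTorch. The class name, bottleneck width, choice of ReLU, and residual connection are illustrative assumptions, not the project's exact code:

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal bottleneck adapter (illustrative): down-project to a small
    hidden size, apply a non-linearity, project back up, then add a
    residual connection to the input."""

    def __init__(self, dim: int, bottleneck_dim: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)  # compress the features
        self.activation = nn.ReLU()                 # non-linearity: keeps the adapted model
                                                    # from being a purely linear combination
                                                    # of the original model's weights
        self.up = nn.Linear(bottleneck_dim, dim)    # restore the original dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.activation(self.down(x)))

Because only the down- and up-projections are new, an adapter adds roughly 2 × dim × bottleneck_dim trainable parameters, a small fraction of what a full transformer layer contains.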

Learning objectives

Upon completion of this project, you will be able to:
  • Understand how adapters work
  • Apply adapters to linear layers in a neural network
  • Train a neural network in a parameter-efficient way by training just the adapted layers (see the sketch below)
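
As a rough illustration of the last two objectives (and of the compact-storage point above), here is a hypothetical sketch that assumes the BottleneckAdapter class shown earlier; the stand-in model, layer indices, and file name are made up for the example:

import torch
import torch.nn as nn

class AdaptedLinear(nn.Module):
    """A frozen pre-trained linear layer followed by a trainable adapter."""

    def __init__(self, linear: nn.Linear, bottleneck_dim: int = 16):
        super().__init__()
        self.linear = linear
        self.adapter = BottleneckAdapter(linear.out_features, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.linear(x))

# Toy stand-in for a pre-trained network (not the project's actual model).
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))

# Freeze every original weight, then wrap the hidden linear layer with an adapter.
for p in model.parameters():
    p.requires_grad = False
model[0] = AdaptedLinear(model[0])

# The output layer is typically left trainable as well.
for p in model[2].parameters():
    p.requires_grad = True

# Only the adapter and output-layer weights reach the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# Compact storage: save just the weights that can change.
torch.save(
    {n: w for n, w in model.state_dict().items()
     if "adapter" in n or n.startswith("2.")},
    "adapter_and_head.pt",
)

Training then proceeds as usual; gradients simply never flow into the frozen weights, so each optimizer step touches only a small number of parameters.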

What you'll need

For this project, you need an intermediate level of proficiency in Python, PyTorch, and deep learning. The only equipment required is a computer with a modern browser, such as the latest version of Chrome, Edge, Firefox, or Safari.

Estimated Effort

45 Minutes

Level

Intermediate

Skills You Will Learn

Artificial Intelligence, Deep Learning, Generative AI, NLP, Python, PyTorch

Language

English

Course Code

GPXX0G24EN
