Parameter Efficient Tuning Methods (PETM)

Parameter Efficient Tuning Methods (PETM) generally refers to a family of techniques used in Natural Language Processing (NLP) to improve the performance of pre-trained large language models (LLMs) on specific downstream tasks while minimizing the number of parameters that need to be fine-tuned.

PETM is an umbrella term that covers a number of specific techniques, such as LoRA (Low-Rank Adaptation), prompt tuning, and adapter layers.

Parameter Efficient Fine Tuning (PEFT) is a closely related term that is often used interchangeably with PETM. In practice, PEFT is sometimes used loosely to refer to LoRA, which is one of the most popular and effective PETM techniques.
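
For a concrete sense of how a PETM technique is applied in practice, the snippet below sketches how a LoRA adapter might be attached to a pre-trained model using the Hugging Face transformers and peft libraries. The model name and hyperparameter values are illustrative assumptions rather than recommendations.

```python
# Illustrative sketch: attaching a LoRA adapter to a frozen pre-trained model
# with the Hugging Face `transformers` and `peft` libraries. The model name
# and hyperparameters are placeholder choices, not recommendations.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # which submodules receive LoRA adapters
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Only the small LoRA matrices are trained; the original model weights stay frozen, which is what keeps the number of tuned parameters low.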

Users of LLMs often avoid fully fine-tuning these models because of the high compute cost and engineering effort involved. Instead, if instruction fine-tuning alone doesn’t meet their needs, they may turn to PETM, which adapts an LLM to their specific data without altering the entire model. There are two main approaches to partial fine-tuning under PETM (a short code sketch illustrating both follows the list):

  1. Adding a few new layers to the LLM. These additional layers are tuned to the user’s requirements and can easily be attached or removed at inference time.
  2. Making selective changes to the existing layers of the LLM to better suit the user’s specific needs.
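
As a rough illustration of the two approaches above, the following PyTorch sketch freezes every pre-trained weight, then adds a small new trainable layer (approach 1) and selectively re-enables training for an existing subset of parameters, here the bias terms in the style of BitFit (approach 2). The function and argument names are hypothetical.

```python
# Rough PyTorch sketch of the two partial fine-tuning approaches described above.
# `base_model` stands in for any pre-trained transformer; names are hypothetical.
import torch.nn as nn

def prepare_for_petm(base_model: nn.Module, hidden_size: int, num_labels: int):
    # Freeze every pre-trained parameter so the original model is left untouched.
    for param in base_model.parameters():
        param.requires_grad = False

    # Approach 1: add a small new trainable layer on top of the frozen model.
    # At inference time this head can be attached or swapped out cheaply.
    task_head = nn.Linear(hidden_size, num_labels)

    # Approach 2: selectively unfreeze a small subset of existing parameters,
    # for example only the bias terms (a BitFit-style choice).
    for name, param in base_model.named_parameters():
        if name.endswith(".bias"):
            param.requires_grad = True

    return base_model, task_head
```

In either case, only the unfrozen parameters and any newly added layers are handed to the optimizer, so both the cost of fine-tuning and the size of the stored result stay small.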

See Also: Prompt Tuning
