In-Context Learning (ICL) versus Instruction Fine-Tuning (IFT)

Let us briefly differentiate between these two concepts:

In-Context Learning: This refers to a language model's ability to adapt to the context provided within a single interaction. The model uses only the immediate context (the text supplied in the current conversation or prompt) to generate relevant, coherent responses. No weights are updated: the model has no long-term memory and does not learn from one interaction for future use. Each prompt is handled independently, and the model effectively resets with each new one. Zero-shot, one-shot, and few-shot prompting are all forms of in-context learning.
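To make this concrete, here is a minimal sketch of zero-shot versus few-shot prompting for a sentiment task. The task, template wording, and function names are illustrative assumptions, not part of any particular API; the point is that every example the model "learns" from must be packed into the prompt itself, since nothing persists between calls.

```python
# Illustrative only: zero-shot vs. few-shot prompts for the same task.
# Everything the model can use (instructions, examples) lives inside the
# prompt; no weights change, and nothing is remembered between calls.

def zero_shot_prompt(text: str) -> str:
    """No examples: the model must rely on pre-training alone."""
    return (
        "Classify the sentiment as Positive or Negative.\n"
        f"Text: {text}\nSentiment:"
    )

def few_shot_prompt(examples: list[tuple[str, str]], text: str) -> str:
    """A few labeled demonstrations are placed in the context before the query."""
    demos = "\n".join(f"Text: {t}\nSentiment: {s}" for t, s in examples)
    return (
        "Classify the sentiment as Positive or Negative.\n"
        f"{demos}\n"
        f"Text: {text}\nSentiment:"
    )

prompt = few_shot_prompt(
    [("I loved this movie.", "Positive"), ("Terrible service.", "Negative")],
    "The food was amazing.",
)
```

A one-shot prompt is simply the few-shot case with a single demonstration in the `examples` list.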

Instruction Fine-Tuning (IFT): Instruction fine-tuning is a form of full fine-tuning in which the model's weights are updated. A pre-trained language model is fine-tuned on a wide range of datasets, each task described with natural-language instructions, so that it learns to follow instructions more reliably and effectively. The goal is to improve the model's ability to understand and execute tasks in a zero-shot manner, i.e., to correctly perform tasks it has not explicitly seen during training. By training the model to follow instructions more accurately, IFT enhances its adaptability and performance across a variety of unseen tasks.
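The key data-preparation step can be sketched as follows: each raw task example is rendered with a natural-language instruction template, and the resulting (prompt, target) pairs are then used for ordinary supervised fine-tuning that updates the weights. The template wording and field names below are assumptions for illustration, not the exact templates used by any published IFT dataset.

```python
# Illustrative sketch of IFT data preparation: a task example plus a
# natural-language instruction is rendered into a (prompt, target) pair.
# Many such pairs, drawn from many different tasks, form the supervised
# fine-tuning dataset in which the model's weights are updated.

# Hypothetical template; real IFT datasets use many varied templates per task.
TEMPLATE = "Instruction: {instruction}\nInput: {input}\nOutput:"

def format_ift_example(instruction: str, input_text: str, target: str) -> dict:
    """Turn one task example into a (prompt, target) pair for fine-tuning."""
    return {
        "prompt": TEMPLATE.format(instruction=instruction, input=input_text),
        "target": target,
    }

sample = format_ift_example(
    instruction="Translate the sentence to French.",
    input_text="Good morning.",
    target="Bonjour.",
)
```

Note the contrast with in-context learning above: here the instruction-formatted examples are consumed during training, so at inference time the model can follow a new instruction with no demonstrations in the prompt.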

FLAN (Fine-tuned LAnguage Net) was among the first models to apply IFT at scale. The fine-tuning aims not to solve one specific task but to make the model more amenable to solving a broad range of NLP tasks in general. The approach showed significant improvements in the model's ability to generalize to unseen tasks, marking a notable advance in language model training.

For detailed information, you can refer to the paper: https://ar5iv.org/pdf/2109.01652.pdf.

See Also: Full fine-tuning and shallow fine-tuning
