Full fine tuning is a type of fine tuning for large language models (LLMs) in which all of the model’s parameters are updated during training. This is in contrast to shallow fine tuning, in which only a subset of the parameters is updated. Full fine tuning is also known as model tuning.
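As a minimal sketch of the distinction (assuming a PyTorch setup; the toy model and layer choices here are hypothetical stand-ins for an LLM), the difference comes down to which parameters keep `requires_grad=True` and are handed to the optimizer:

```python
import torch
from torch import nn

# Hypothetical toy "LLM": embeddings, a body, and a task head.
model = nn.Sequential(
    nn.Embedding(1000, 64),  # token embeddings
    nn.Linear(64, 64),       # stand-in for the transformer body
    nn.Linear(64, 2),        # task-specific head
)

# Full fine tuning: every parameter receives gradient updates.
for p in model.parameters():
    p.requires_grad = True
full_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Shallow fine tuning: freeze everything, then unfreeze a subset
# (here, only the final head).
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True
shallow_opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```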
Full fine tuning is typically used for more complex tasks, or when a large performance improvement is needed. However, it is more computationally expensive and time-consuming than shallow fine tuning. Like pre-training, full fine tuning demands substantial memory and compute, because the weights, gradients, optimizer states, and other training state for every parameter must be stored and updated throughout training.
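To make that cost concrete, a common back-of-the-envelope estimate (assuming fp32 weights trained with Adam, and ignoring activations) is about 16 bytes per parameter: 4 for the weight, 4 for its gradient, and 8 for Adam's two moment estimates. A rough sketch of the arithmetic:

```python
# Rough memory estimate for full fine tuning with Adam in fp32
# (weight + gradient + two optimizer moments; activations excluded).
BYTES_PER_PARAM = 4 + 4 + 8

def training_memory_gb(num_params: float) -> float:
    return num_params * BYTES_PER_PARAM / 1e9

for n in (125e6, 1.3e9, 7e9):
    print(f"{n / 1e9:>5.2f}B params -> ~{training_memory_gb(n):,.0f} GB")
# 0.12B params -> ~2 GB
# 1.30B params -> ~21 GB
# 7.00B params -> ~112 GB
```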
Reference: Lester et al., "The Power of Scale for Parameter-Efficient Prompt Tuning," https://arxiv.org/pdf/2104.08691.pdf
See Also: Fine tuning with instructions