In-context Learning (with examples)

The LLM learns in-context: the examples in the prompt show it how to respond. Typically, one to five examples (shots) are enough to improve the quality of responses. Including too many examples can cause the model to overfit to them and reduce the quality of responses.

As with classical model training, the quality and distribution of the examples are very important. Pick examples that are representative of the scenarios you need the model to handle, and keep the distribution of the examples (e.g., the number of examples per class in a classification task) aligned with the distribution you expect in practice, as the few-shot sketch later in this section illustrates.

Zero-shot prompt

Below is an example of zero-shot prompting, where you don’t provide any examples to the LLM within the prompt itself.

Decide whether a Tweet’s sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!

Sentiment:

Response from ChatGPT

Tweet: I loved the new YouTube video you made!

Sentiment: Positive
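To make this concrete, here is a minimal sketch of sending the zero-shot prompt above to a chat model, assuming the OpenAI Python SDK (v1+) and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Zero-shot: the instruction and the input, with no worked examples
prompt = (
    "Decide whether a Tweet's sentiment is positive, neutral, or negative.\n\n"
    "Tweet: I loved the new YouTube video you made!\n"
    "Sentiment:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. "Positive"
```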

One-shot prompt

Below is an example of one-shot prompting, where you provide one example to the LLM within the prompt to give some guidance on what type of response you want.

Decide whether a Tweet’s sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!

Sentiment: positive

Tweet: That was awful. Super boring 😠

Sentiment:

Response from ChatGPT

Based on the content of the tweets and their tone, the sentiment of the second tweet can be determined:

Tweet: That was awful. Super boring 😠

Sentiment: negative

This tweet uses strongly negative language (“awful”, “boring”) and an angry emoji, indicating a negative sentiment.
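With chat-style APIs, a one-shot example does not have to be concatenated into a single prompt string; it can also be passed as a prior user/assistant exchange. A minimal sketch under the same assumptions as above (OpenAI Python SDK, illustrative model name):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {
        "role": "system",
        "content": "Decide whether a Tweet's sentiment is positive, "
                   "neutral, or negative. Reply with the label only.",
    },
    # The single worked example, supplied as a prior user/assistant exchange
    {"role": "user", "content": "Tweet: I loved the new YouTube video you made!\nSentiment:"},
    {"role": "assistant", "content": "positive"},
    # The tweet we actually want classified
    {"role": "user", "content": "Tweet: That was awful. Super boring 😠\nSentiment:"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=messages,
)
print(response.choices[0].message.content)  # expected: "negative"
```

Either form works; the message-history variant makes it easy to add or remove examples without string surgery.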

Few-shot prompt

Below is an example of few-shot prompting, where you provide multiple examples to the LLM within the prompt to give more guidance on what type of response you want.

Decide whether a Tweet’s sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!

Sentiment: positive

Tweet: That was awful. Super boring 😠

Sentiment: negative

Tweet: Something surprised me about this video – it was actually original. It was not the same old recycled stuff that I always see. Watch it – you will not regret it.

Sentiment:

Response from ChatGPT

Based on the context and tone of the third tweet, the sentiment can be classified as positive. The tweet starts with a note of surprise about the originality of the video, which is a positive aspect. It further distances this video from “the same old recycled stuff,” suggesting a favorable comparison. The recommendation to watch it with an assurance of no regrets also implies a positive sentiment.

Therefore, the completed prompt would be:

Tweet: Something surprised me about this video – it was actually original. It was not the same old recycled stuff that I always see. Watch it – you will not regret it.

Sentiment: positive
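In practice, few-shot prompts are usually assembled programmatically from a pool of labeled examples, which also makes it easy to keep the per-class balance aligned with what you expect at inference time. A minimal sketch under the same assumptions as above; the neutral example is hypothetical:

```python
from openai import OpenAI

# Pool of labeled examples. Keep the class balance representative of the
# traffic you expect at inference time; the neutral example is hypothetical.
EXAMPLES = [
    ("I loved the new YouTube video you made!", "positive"),
    ("That was awful. Super boring 😠", "negative"),
    ("New video is up, link in bio.", "neutral"),
]

def build_few_shot_prompt(tweet: str) -> str:
    """Assemble the instruction, the worked examples, and the new tweet."""
    lines = ["Decide whether a Tweet's sentiment is positive, neutral, or negative.", ""]
    for text, label in EXAMPLES:
        lines += [f"Tweet: {text}", f"Sentiment: {label}", ""]
    lines += [f"Tweet: {tweet}", "Sentiment:"]
    return "\n".join(lines)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = build_few_shot_prompt(
    "Something surprised me about this video – it was actually original."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: "positive"
```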

Choosing between zero-shot, one-shot, and few-shot prompting

Which prompting technique to use depends on your goal. Zero-shot prompts are more open-ended and can give you creative answers, while one-shot and few-shot prompts show the model how to behave, so you get more predictable answers that are consistent with the examples provided.

See Also: In-context learning
