
In-context versus Prompt-based

In the context of large language models (LLMs) such as GPT-3.5, “in-context” refers to supplying additional text or context along with a request so the model can generate more coherent and contextually relevant responses. When you give the model context together with a prompt, it takes that input into account and generates its response accordingly. This is particularly useful when you want to hold a conversation with the model or steer it in a specific direction.

Here’s a brief comparison of “in-context” vs. “prompt-based” interactions:

| Aspect | In-Context | Prompt-Based |
| --- | --- | --- |
| Use Case | Continuous conversation | Single or few queries |
| How It Works | Builds upon prior messages | Single prompt per request |
| Contextual Flexibility | Understands conversation flow | Generates one-off responses |
| Example | Chatting with the model | Asking questions or requests |
| Response Handling | Keeps context in mind | Responds to immediate input |

It’s important to note that “in-context” interactions let you maintain a conversation with the model by providing a series of messages, so the model can reference earlier messages to understand the conversation’s context. In contrast, “prompt-based” interactions involve a single prompt or query, and the model generates a response based on that prompt alone.
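
To make the distinction concrete, here is a minimal sketch assuming the OpenAI Python SDK (v1.x) and gpt-3.5-turbo as a stand-in model; the exact client and model are assumptions for illustration. The first call is a one-off, prompt-based request, while the second passes the earlier exchange back in so the model answers in context.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt-based: a single, self-contained query with no prior history.
single = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is tokenization?"}],
)
print(single.choices[0].message.content)

# In-context: the request carries the conversation so far, so the model
# can resolve references like "it" against the earlier messages.
history = [
    {"role": "user", "content": "What is tokenization?"},
    {"role": "assistant", "content": single.choices[0].message.content},
    {"role": "user", "content": "How does it affect context length?"},
]
follow_up = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history,
)
print(follow_up.choices[0].message.content)
```

The only structural difference is the messages list: the prompt-based call contains one user message, while the in-context call carries the whole history, which is what lets the follow-up question be understood as part of the same conversation.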
