Generative AI Knowledge

Delve into our original articles, starting from the fundamentals to advanced insights.

Top 10 Applications of Gen AI: Composing Long Emails

This article reveals the top 10 ways GenAI transforms our daily lives, from crafting dynamic reports and penning polished emails to enhancing grammar and effortlessly translating languages. Dive in to discover how GenAI can be your versatile sidekick, simplifying tasks and supercharging your skills!

Continue reading →

LlamaIndex – Technical Background

LlamaIndex is a popular open-source library that gives developers robust tools and abstractions for integrating large language models (LLMs) into software applications efficiently. It provides a unified API across LLM providers, essential text-processing utilities, and extensibility hooks, and it is optimized for performance, making it well suited to building advanced features such as chatbots, content generation, and data-analysis tools.
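
As a rough illustration of that unified interface, the sketch below issues a single completion call. It assumes a recent llama-index release (where the OpenAI integration lives in llama_index.llms.openai) and an OPENAI_API_KEY environment variable; import paths have changed between versions.

```python
# Minimal sketch of LlamaIndex's unified LLM interface (assumed import path;
# requires the llama-index OpenAI integration and an OPENAI_API_KEY env var).
from llama_index.llms.openai import OpenAI

# The same .complete() call is shared by other LLM integrations that follow
# the common interface, which is what makes the API "unified".
llm = OpenAI(model="gpt-3.5-turbo")
response = llm.complete("Summarize what a vector index is in one sentence.")
print(response.text)
```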

Continue reading →

LlamaIndex – Design pattern utilizing Chat method of OpenAI Class (Part 1)

Working with Large Language Models (LLMs) as a developer can be challenging due to their complexity and the broad scope of their capabilities. There are numerous ways to interact with them, each with its own nuances and potential pitfalls. In this article, we demonstrate code that implements the “chat” functionality, which is fundamental to how models like ChatGPT operate. The basic process involves providing the LLM with a system instruction, which sets the context for the interaction. You then send…
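
As a rough sketch of that pattern, assuming a recent llama-index release and an OPENAI_API_KEY environment variable (the article's own code may differ):

```python
# Minimal sketch of the synchronous chat pattern: a system instruction sets
# the context, then a user message carries the actual request.
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

messages = [
    ChatMessage(role="system", content="You are a concise technical assistant."),
    ChatMessage(role="user", content="Explain what an embedding is in two sentences."),
]

# chat() returns a ChatResponse; the generated text is on response.message.content.
response = llm.chat(messages)
print(response.message.content)
```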

Continue reading →

LlamaIndex – Design pattern utilizing achat method of OpenAI Class (Part 2)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, particularly when dealing with asynchronous operations. In this part of our series, we explore how to handle asynchronous API calls within the “chat” functionality of models like ChatGPT. Asynchronous programming is essential for maintaining responsive applications, especially when integrating LLMs that may require significant processing time to generate responses.

This example demonstrates the process of setting up an asynchronous chat session with an LLM. Initially, a system instruction is…
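
A rough sketch of the asynchronous variant, under the same assumptions as before (recent llama-index release, OPENAI_API_KEY set); achat() is awaited instead of called directly:

```python
# Minimal sketch of asynchronous chat with achat(); awaiting the call lets
# other tasks on the event loop run while the request is in flight.
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI


async def main() -> None:
    llm = OpenAI(model="gpt-3.5-turbo")
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="List three uses of asynchronous I/O."),
    ]
    response = await llm.achat(messages)
    print(response.message.content)


asyncio.run(main())
```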

Continue reading →

LlamaIndex – Design pattern utilizing stream_chat method of OpenAI Class (Part 3 in series)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, particularly when dealing with streaming operations. In this part of our series, we explore how to handle streaming API calls within the “chat” functionality of models like ChatGPT. Streaming allows the app developer to start consuming the generated text as it is produced, rather than waiting for the entire completion to be ready.

The provided code demonstrates how to use the LlamaIndex package to interact with…
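
A rough sketch of streaming chat, under the same assumptions (recent llama-index release, OPENAI_API_KEY set); the article's full code may differ:

```python
# Minimal sketch of stream_chat(): each yielded chunk exposes only the newly
# generated text on .delta, so it can be printed as soon as it arrives.
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Write a short limerick about streaming APIs."),
]

for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
print()
```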

Continue reading →

LlamaIndex – Design pattern utilizing astream_chat method of OpenAI Class (Part 4 in series)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, especially when implementing asynchronous streaming operations. In this part of our series, we delve into handling asynchronous stream API calls within the “chat” functionality of models like ChatGPT. Asynchronous streaming allows the app developer to handle generated text in real-time as it becomes available, rather than waiting for the entire output to be ready.

This code leverages the LlamaIndex package to interact asynchronously with OpenAI’s GPT-3.5-turbo and GPT-4 models…
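
A rough sketch of asynchronous streaming chat, under the same assumptions (recent llama-index release, OPENAI_API_KEY set, API access to the model used):

```python
# Minimal sketch of astream_chat(): awaiting the call returns an async
# generator whose chunks expose the newly generated text on .delta.
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI


async def main() -> None:
    llm = OpenAI(model="gpt-4")
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="Describe token streaming in one paragraph."),
    ]
    stream = await llm.astream_chat(messages)
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)
    print()


asyncio.run(main())
```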

Continue reading →

LlamaIndex – Design pattern utilizing astream method of OpenAI Class (Part 5 in series)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, especially when implementing asynchronous streaming operations. In this part of our series, we delve into a Python module designed to interact with OpenAI’s language models asynchronously, leveraging the streaming capabilities of the LlamaIndex package.

This module enhances application responsiveness and interaction by utilizing asynchronous streaming operations. Asynchronous streaming allows the app developer to handle generated text in real-time as it becomes available, rather than waiting for the entire output to…
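
A rough sketch of asynchronous streaming over a plain prompt. The sketch below uses astream_complete(), which exists in recent llama-index releases; the method the article actually uses may be named differently, and the usual assumptions (OPENAI_API_KEY, import paths) apply:

```python
# Minimal sketch of asynchronous streaming completion with astream_complete();
# chunks carry only the newly generated text on .delta.
import asyncio

from llama_index.llms.openai import OpenAI


async def main() -> None:
    llm = OpenAI(model="gpt-3.5-turbo")
    stream = await llm.astream_complete("Explain backpressure in streaming systems.")
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)
    print()


asyncio.run(main())
```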

Continue reading →

LlamaIndex – Design pattern utilizing stream method of OpenAI Class (Part 6 in series)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, especially when implementing different models in tandem to compare their outputs. In this part of our series, we delve into a Python script designed to utilize two different versions of GPT: GPT-3.5 Turbo and GPT-4, using the llama_index package.

The benefit of using multiple models is to compare their capabilities, performance, and nuances in response quality side-by-side. This approach allows developers to make informed decisions about which model best suits…
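
A rough sketch of the side-by-side idea, using stream_complete() (an assumption; the article's exact method may differ) to stream the same prompt through both models:

```python
# Minimal sketch of comparing two models on the same prompt; each answer is
# streamed so the outputs can be inspected as they arrive.
from llama_index.llms.openai import OpenAI

PROMPT = "In two sentences, explain retrieval-augmented generation."

for model_name in ("gpt-3.5-turbo", "gpt-4"):
    llm = OpenAI(model=model_name)
    print(f"\n--- {model_name} ---")
    for chunk in llm.stream_complete(PROMPT):
        print(chunk.delta, end="", flush=True)
    print()
```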

Continue reading →

LlamaIndex – Design pattern utilizing acomplete method of OpenAI Class (Part 7 in series)

Engaging with Large Language Models (LLMs) presents various challenges and opportunities, especially when implementing efficient asynchronous communication with multiple models. In this part of our series, we delve into a practical demonstration of interacting with OpenAI’s GPT-3.5-turbo and GPT-4 models using the llama_index package. We use a method called acomplete, which takes a simple string prompt and internally converts it into a chat completions API call, returning the result to the caller.
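
A rough sketch of that pattern, assuming a recent llama-index release, an OPENAI_API_KEY environment variable, and API access to both models; asyncio.gather issues the two acomplete() calls concurrently:

```python
# Minimal sketch of acomplete(): a plain string prompt goes in and the library
# handles the underlying chat-completions call; two models run concurrently.
import asyncio

from llama_index.llms.openai import OpenAI

PROMPT = "Give one practical use case for asynchronous LLM calls."


async def ask(model_name: str) -> str:
    llm = OpenAI(model=model_name)
    response = await llm.acomplete(PROMPT)
    return f"{model_name}: {response.text}"


async def main() -> None:
    results = await asyncio.gather(ask("gpt-3.5-turbo"), ask("gpt-4"))
    for line in results:
        print(line)


asyncio.run(main())
```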

Continue reading →

Subscribe to our newsletter

Join 1,000+ other people who are mastering Generative AI in 2024.

You will be the first to know when we publish new articles
