Build a Generative AI Chatbot with OpenAI and Streamlit

Generative AI models like OpenAI’s GPT-3.5 and GPT-4 have revolutionized the way we interact with technology. They have made it easy to create chatbots that can handle diverse tasks, from answering questions and providing personalized recommendations to creative writing and code generation. This tutorial will guide you through building a simple web-based chatbot using Python, OpenAI’s Chat Completion API, and Streamlit.

Streamlit is an open-source Python library that simplifies the creation of data applications without requiring extensive frontend or web framework knowledge. In this tutorial, you will learn how to create an interactive chatbot that responds to user queries in real-time via a browser interface.

Prerequisites

Before you begin, ensure you have the following:

  1. Python Environment: Install Python 3.9 or later. If you’re using a Python environment manager like venv or conda, create a new environment.
  2. Install Dependencies: You’ll need the openai and streamlit libraries. The code in this tutorial uses the openai library’s ChatCompletion interface, which requires a pre-1.0 release:
   pip install "openai<1.0" streamlit
  3. OpenAI API Key: Sign up for an OpenAI account at the OpenAI Platform (https://platform.openai.com) to retrieve your API key. For security, set it as an environment variable:
   export OPENAI_API_KEY="your_openai_api_key_here"

(Embedding it directly in your code is not recommended for production.)

Application Architecture

High-Level Overview

  • User Interface (Frontend): Streamlit provides a web UI for user interaction with the chatbot, featuring a text input box for messages and displaying conversation history.
  • Backend / Logic: Upon submitting a message, the app calls the OpenAI API’s Chat Completion endpoint with the entire conversation context to generate responses.
  • Session State Management: Streamlit re-runs the script with each user interaction. To maintain conversation history across these runs, we will use st.session_state.
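
In isolation, the session-state pattern looks like the sketch below; the "messages" key matches the app built later in this tutorial, and the counter line is only there to illustrate persistence.

import streamlit as st

# Streamlit reruns this script from top to bottom on every interaction.
# Values stored in st.session_state persist across those reruns, so the
# history list is only created once per browser session.
if "messages" not in st.session_state:
    st.session_state["messages"] = []

st.write(f"Messages stored so far: {len(st.session_state['messages'])}")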

Conversation Flow

  1. Initialization: Start with a “system” message that defines the assistant’s role:
   {"role": "system", "content": "You are a helpful assistant."}
  2. User Input: The user types a message into a text field and submits it.
  3. API Call: The app sends the entire conversation history to OpenAI’s Chat Completion endpoint.
  4. Response Display: The assistant’s response is appended to the conversation history and displayed in the web interface (an example of the resulting message list follows).
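
Concretely, the conversation history is just a list of role/content dictionaries. After one exchange it might look roughly like this (the user question and the assistant’s wording here are only illustrative):

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Streamlit?"},
    # Appended after the API call returns; the exact wording will vary.
    {"role": "assistant", "content": "Streamlit is an open-source Python library for building data apps."},
]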

Handling Context and Token Limits

As conversations grow longer, you may encounter token limits. Consider these strategies:

  • Truncate old messages to keep prompts concise (a minimal sketch follows this list).
  • Summarize earlier parts of the conversation periodically.
  • Implement vector-based memory retrieval for advanced use cases.
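
For the truncation strategy, a minimal helper might keep the system message plus the most recent turns. The function name trim_history and the max_messages cutoff below are illustrative choices, not part of the app’s code:

def trim_history(messages, max_messages=20):
    """Keep the system message plus the most recent messages.

    Counting messages is a crude proxy for counting tokens; a production
    app would measure actual token usage (for example with the tiktoken
    library) before each API call.
    """
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_messages:]
    return system + recent

In the app below, you would call this on st.session_state.messages just before sending the history to the API.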

Example Code

Create a new Python file named chatbot_app.py and paste the following code:

import os
import openai
import streamlit as st

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")

st.set_page_config(page_title="Generative AI Chatbot")

# Initialize session state for conversation if not present
if "messages" not in st.session_state:
    st.session_state["messages"] = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

st.title("Generative AI Chatbot")
st.write("Welcome! Ask the assistant anything and receive a response.")

# User Input
user_input = st.text_input("You:", placeholder="Type your question here...")

# Function to call OpenAI's Chat Completion API
def get_openai_response(messages, model="gpt-3.5-turbo", temperature=0.7):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature
    )
    return response.choices[0].message["content"].strip()

# When user submits a new message
if user_input:
    # Add user message to state
    st.session_state.messages.append({"role": "user", "content": user_input})

    # Get response from OpenAI
    assistant_response = get_openai_response(st.session_state.messages)

    # Add assistant response to state
    st.session_state.messages.append({"role": "assistant", "content": assistant_response})

# Display conversation history
for message in st.session_state.messages:
    if message["role"] == "user":
        st.markdown(f"**You:** {message['content']}")
    elif message["role"] == "assistant":
        st.markdown(f"**AI:** {message['content']}")

How It Works

  • Session State: st.session_state maintains variables across user interactions, storing messages throughout the conversation.
  • OpenAI API Call: Each query sends the complete conversation history to OpenAI’s ChatCompletion.create() method for context-aware responses.
  • User Interface: The interface includes a title, instructions, an input box, and renders the conversation history dynamically.
  • Prompt Engineering: Modify the system message to adjust the assistant’s personality or expertise—for example, “You are a Python coding assistant.”
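
As a quick illustration, specializing the assistant only requires changing the initial system message; the prompt wording here is just one possible variant:

# Drop-in replacement for the initialization block in chatbot_app.py.
if "messages" not in st.session_state:
    st.session_state["messages"] = [
        {"role": "system", "content": "You are a Python coding assistant. Answer with short, working code examples."}
    ]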

Running the App

Run your application with this command:

streamlit run chatbot_app.py

After running this command, Streamlit will open a browser window (or provide a URL like http://localhost:8501/) where you can interact with your chatbot.

Enhancements and Next Steps

To further improve your chatbot:

  • Improved Prompting: Experiment with different system messages for varied tones or roles.
  • UI Customization: Implement sidebars for adjusting parameters like temperature or providing instructions (see the sidebar sketch after this list).
  • Memory and Summarization: Summarize older conversations or store context in a vector database for retrieval.
  • Caching and Rate Limits: Cache responses for repeated queries and manage rate limits for multiple users.
  • Security and Authentication: Secure your API keys and consider implementing user authentication if deploying publicly.
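
As one way to approach the UI customization point above, a sidebar slider can expose the temperature parameter and feed it into the existing get_openai_response helper; the layout and wording below are only a sketch:

# Sidebar control for the sampling temperature, added near the top of chatbot_app.py.
temperature = st.sidebar.slider("Temperature", min_value=0.0, max_value=1.0, value=0.7, step=0.1)
st.sidebar.caption("Higher values give more creative replies; lower values give more focused ones.")

# Pass the chosen value through when generating a reply:
assistant_response = get_openai_response(st.session_state.messages, temperature=temperature)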

Conclusion

By combining Streamlit with OpenAI’s Chat Completion API, you can quickly develop an engaging conversational AI application. This foundational setup can be tailored for various use cases—ranging from customer support bots to interactive educational tools—empowering you to create responsive chat experiences leveraging advanced language models.
