Migrating off ConversationBufferMemory or ConversationStringBufferMemory
ConversationBufferMemory and ConversationStringBufferMemory were used to keep track of a conversation between a human and an AI assistant without any additional processing. ConversationStringBufferMemory is equivalent to ConversationBufferMemory but was targeted at LLMs that were not chat models.
The methods for handling conversation history using existing modern primitives are:
- Using LangGraph persistence along with appropriate processing of the message history
- Using LCEL with RunnableWithMessageHistory combined with appropriate processing of the message history.
Most users will find LangGraph persistence both easier to use and easier to configure than the equivalent LCEL, especially for more complex use cases.
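For reference, here is a minimal sketch of the RunnableWithMessageHistory approach, assuming an in-memory history keyed by a session_id. The store dict and get_session_history helper are illustrative, not part of the library:

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# Illustrative in-memory registry of per-session histories.
store = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

prompt = ChatPromptTemplate(
    [
        MessagesPlaceholder(variable_name="history"),
        ("human", "{text}"),
    ]
)

chain = RunnableWithMessageHistory(
    prompt | ChatOpenAI(),
    get_session_history,
    input_messages_key="text",
    history_messages_key="history",
)

# Each session_id gets its own history.
chain.invoke({"text": "my name is bob"}, {"configurable": {"session_id": "1"}})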
Set up
%%capture --no-stderr
%pip install --upgrade --quiet langchain-openai langchain
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()
Usage with LLMChain / ConversationChain
This section shows how to migrate off ConversationBufferMemory or ConversationStringBufferMemory that's used together with either an LLMChain or a ConversationChain.
Legacy
Below is example usage of ConversationBufferMemory with an LLMChain or an equivalent ConversationChain.
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.chat import (
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate(
    [
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
legacy_chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    memory=memory,
)
legacy_result = legacy_chain.invoke({"text": "my name is bob"})
print(legacy_result)
legacy_result = legacy_chain.invoke({"text": "what was my name"})
{'text': 'Hello Bob! How can I assist you today?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}
legacy_result["text"]
'Your name is Bob. How can I assist you today, Bob?'
Note that there is no support for separating conversation threads in a single memory object.
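Since the legacy memory classes have no notion of threads, supporting multiple conversations means keeping one memory object per conversation yourself. A minimal, illustrative workaround (the chains dict and get_legacy_chain helper are hypothetical, reusing prompt from above):

chains = {}

def get_legacy_chain(session_id: str) -> LLMChain:
    # One chain (and thus one memory object) per session id.
    if session_id not in chains:
        chains[session_id] = LLMChain(
            llm=ChatOpenAI(),
            prompt=prompt,
            memory=ConversationBufferMemory(
                memory_key="chat_history", return_messages=True
            ),
        )
    return chains[session_id]

get_legacy_chain("alice").invoke({"text": "my name is alice"})
get_legacy_chain("bob").invoke({"text": "what was my name?"})  # knows nothing about alice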
LangGraph
The example below shows how to use LangGraph to implement a ConversationChain or LLMChain with ConversationBufferMemory.
This example assumes that you're already somewhat familiar with LangGraph. If you're not, then please see the LangGraph Quickstart Guide for more details.
LangGraph offers a lot of additional functionality (e.g., time-travel and interrupts) and will work well for other more complex (and realistic) architectures.
import uuid

from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Define a new graph
workflow = StateGraph(state_schema=MessagesState)

# Define a chat model
model = ChatOpenAI()

# Define the function that calls the model
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    # The returned message is appended to the existing message list
    # by the MessagesState reducer.
    return {"messages": response}

# Define the single node in the graph and the edge into it
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

# Adding memory is straightforward in LangGraph!
memory = MemorySaver()
app = workflow.compile(
    checkpointer=memory
)
# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Here, let's confirm that the AI remembers our name!
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
hi! I'm bob
================================== Ai Message ==================================
Hello Bob! How can I assist you today?
================================ Human Message =================================
what was my name?
================================== Ai Message ==================================
Your name is Bob. How can I help you today, Bob?
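Because conversations are keyed by thread_id, a new thread id starts from a clean slate, which is how a single application can serve many users. A minimal sketch reusing app and config from above (get_state_history is LangGraph's checkpoint-history API, which also underpins features such as time travel):

# A fresh thread id means a fresh conversation: the model has no
# memory of "bob" here.
new_config = {"configurable": {"thread_id": uuid.uuid4()}}
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, new_config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# The original thread's checkpoints remain inspectable.
for snapshot in app.get_state_history(config):
    print(len(snapshot.values["messages"]))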