Semantic LLM caching¶
Save on tokens and latency with an LLM response cache based on semantic similarity (as opposed to exact match), powered by Vector Search.
NOTE: this uses Cassandra's "Vector Search" capability. Make sure you are connecting to a vector-enabled database for this demo.
The Cassandra-backed "semantic cache" for prompt responses is imported like this:
from langchain.cache import CassandraSemanticCache
As usual, a database connection is needed to access Cassandra. The following assumes that a vector-search-capable Astra DB instance is available. Adjust as needed.
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db' # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
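If you prefer not to rely on the cqlsession helper shipped with these notebooks, here is a minimal sketch of an equivalent Astra DB connection made directly with the DataStax Python driver. The environment variable names below (ASTRA_DB_SECURE_BUNDLE_PATH, ASTRA_DB_TOKEN, ASTRA_DB_KEYSPACE) are illustrative placeholders, not part of the original notebook:
# Hedged sketch: connect to a vector-enabled Astra DB with the Cassandra Python driver.
# The three environment variables below are placeholders you must supply yourself.
import os
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    cloud={'secure_connect_bundle': os.environ['ASTRA_DB_SECURE_BUNDLE_PATH']},
    auth_provider=PlainTextAuthProvider('token', os.environ['ASTRA_DB_TOKEN']),
)
session = cluster.connect()
keyspace = os.environ['ASTRA_DB_KEYSPACE']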
An embedding function and an LLM are needed.
Below is the logic to instantiate the LLM and embeddings of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)

if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings
    llm = VertexAI()
    myEmbedding = VertexAIEmbeddings()
    print('LLM+embeddings from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = OpenAI(temperature=0)
    myEmbedding = OpenAIEmbeddings()
    print('LLM+embeddings from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    myEmbedding = OpenAIEmbeddings(model=os.environ['AZURE_OPENAI_EMBEDDINGS_MODEL'],
                                   deployment=os.environ['AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT'])
    print('LLM+embeddings from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM+embeddings from OpenAI
Create the cache¶
At this point you can instantiate the semantic cache.
Note: in the following, the table_name parameter is constructed so that different embeddings end up in separate tables. This avoids mismatches when running this demo over and over with varying embedding functions; in most applications, where a single choice of embedding is made, there is no need to be this finicky and you can usually leave the table name at its default value.
cassSemanticCache = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    table_name=f'semantic_cache_{llmProvider}',
)
Make sure the cache starts empty with:
cassSemanticCache.clear()
Configure the cache at a LangChain global level:
import langchain
langchain.llm_cache = cassSemanticCache
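If your LangChain version exposes the langchain.globals module, the equivalent setter-based configuration (an assumption about your installed version, not part of the original notebook) would be:
# Hedged alternative, assuming a LangChain release that provides langchain.globals:
from langchain.globals import set_llm_cache

set_llm_cache(cassSemanticCache)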
Use the cache¶
Now try submitting a few prompts to the LLM and pay attention to the response times.
If the LLM is actually run, the response should take on the order of a few seconds; in case of a cache hit, it takes well under a second.
Notice that you get a cache hit even after rephrasing the question.
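The cells below rely on the notebook %%time magic; if you run this as a plain script, a minimal stdlib-based sketch for measuring the same latency (the question string is just an example) is:
# Hedged sketch for timing an LLM call outside a notebook (no %%time magic available).
import time

start = time.perf_counter()
answer = llm("How many eyes do spiders have?")
elapsed = time.perf_counter() - start
print(f"{answer.strip()}  [took {elapsed:.2f}s]")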
%%time
SPIDER_QUESTION_FORM_1 = "How many eyes do spiders have?"
# A new question should take long
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 36.3 ms, sys: 4.96 ms, total: 41.2 ms Wall time: 3.66 s
'\n\nMost spiders have eight eyes, although some have fewer or more.'
%%time
# Second time, very same question, this should be quick
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 8.26 ms, sys: 3.16 ms, total: 11.4 ms Wall time: 403 ms
'\n\nMost spiders have eight eyes, although some have fewer or more.'
%%time
SPIDER_QUESTION_FORM_2 = "How many eyes does a spider generally have?"
# Just a rephrasing: but it's the same question,
# so it will just take the time to evaluate embeddings
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 19.5 ms, sys: 770 µs, total: 20.2 ms Wall time: 1e+03 ms
'\n\nMost spiders have eight eyes, although some have fewer or more.'
Time for a really new question:
%%time
LOGIC_QUESTION_FORM_1 = "Is absence of proof the same as proof of absence?"
# A totally new question
llm(LOGIC_QUESTION_FORM_1)
CPU times: user 31.7 ms, sys: 3.96 ms, total: 35.7 ms Wall time: 3.33 s
'\n\nNo, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.'
Going back to the same question as earlier (not literally, though):
%%time
SPIDER_QUESTION_FORM_3 = "How many eyes are on the head of a typical spider?"
# Trying to catch the cache off-guard :)
llm(SPIDER_QUESTION_FORM_3)
CPU times: user 20.1 ms, sys: 894 µs, total: 21 ms Wall time: 862 ms
'\n\nMost spiders have eight eyes, although some have fewer or more.'
And again to the logic riddle:
%%time
LOGIC_QUESTION_FORM_2 = "Is it true that the absence of a proof equates the proof of an absence?"
# Switching to the other question again
llm(LOGIC_QUESTION_FORM_2)
CPU times: user 20.5 ms, sys: 73 µs, total: 20.5 ms Wall time: 1.19 s
'\n\nNo, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.'
Additional options¶
When creating the semantic cache, you can specify a few other options, such as the metric used to calculate similarity and, correspondingly, the threshold for accepting a "cache hit".
Here is an example which uses the L2 (Euclidean) metric:
anotherCassSemanticCache = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    table_name=f'semantic_cache_{llmProvider}',
    distance_metric='l2',
    score_threshold=0.4,
)
This cache builds on the same database table as the previous one, as can be seen e.g. with:
lookup = anotherCassSemanticCache.lookup_with_id_through_llm(
    LOGIC_QUESTION_FORM_2,
    llm,
)
if lookup:
    cache_entry_id, response = lookup
    print(f"cache_entry_id = {cache_entry_id}")
    # `response` is a List[langchain.schema.output.Generation], so:
    print(f"\n{response[0].text.strip()}")
else:
    print('No match.')
cache_entry_id = 77add13036bcaa23c74ebf2ab2c56441-0e4d63bf605cd5f4329128fcbe38762d
No, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.
Caching and Chat Models¶
The CassandraSemanticCache supports caching within chat-oriented LangChain abstractions such as ChatOpenAI as well:
(warning: the following is demonstrated with OpenAI only for the time being)
from langchain.chat_models import ChatOpenAI
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
%%time
print(chat_llm.predict("Can supernovae result in a black hole?"))
Yes, supernovae can result in the formation of a black hole. A supernova occurs when a massive star reaches the end of its life and undergoes a catastrophic explosion. The explosion expels most of the star's material into space, while the core collapses under its own gravity. If the core of the star is massive enough, typically more than three times the mass of the Sun, it will collapse further and form a black hole. This collapse is so intense that it creates a region of space with an extremely strong gravitational pull, from which nothing, not even light, can escape. This region is known as a black hole.
CPU times: user 26 ms, sys: 11.6 ms, total: 37.6 ms Wall time: 5.3 s
%%time
# Expect a much faster response:
print(chat_llm.predict("Is it possible that black holes come from big exploding stars?"))
Yes, supernovae can result in the formation of a black hole. A supernova occurs when a massive star reaches the end of its life and undergoes a catastrophic explosion. The explosion expels most of the star's material into space, while the core collapses under its own gravity. If the core of the star is massive enough, typically more than three times the mass of the Sun, it will collapse further and form a black hole. This collapse is so intense that it creates a region of space with an extremely strong gravitational pull, from which nothing, not even light, can escape. This region is known as a black hole.
CPU times: user 25.6 ms, sys: 311 µs, total: 26 ms Wall time: 686 ms
(Actually, every object which inherits from the LangChain Generation class can be seamlessly stored and retrieved in this cache.)
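For instance, you can write and read Generation objects through LangChain's low-level cache interface directly. Here is a minimal sketch; the llm_string label and the cached text are made-up illustrations, not part of the original notebook:
# Hedged sketch: direct use of the BaseCache interface with Generation objects.
# 'my-llm-v1' is an arbitrary illustrative llm_string, not a real model identifier.
from langchain.schema import Generation

cassSemanticCache.update(
    prompt="What is the chemical symbol for gold?",
    llm_string="my-llm-v1",
    return_val=[Generation(text="The chemical symbol for gold is Au.")],
)
# If the rephrased question is close enough in embedding space, this returns the cached generations:
hit = cassSemanticCache.lookup("Which symbol denotes gold in chemistry?", "my-llm-v1")
if hit:
    print(hit[0].text)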
Stale entry control¶
Time-To-Live (TTL)¶
You can configure a time-to-live (TTL) property on the cache, so that cached entries are automatically evicted after a certain time.
Setting langchain.llm_cache to the following will make entries expire after one hour:
cacheWithTTL = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    table_name=f'semantic_cache_{llmProvider}',
    ttl_seconds=3600,
)
Manual cache eviction¶
Alternatively, you can invalidate individual entries one at a time, just like you saw for the exact-match CassandraCache.
But this cache is an index based on sentence similarity, so the procedure now takes two steps: first, a lookup to find the id of the matching document:
lookup = cassSemanticCache.lookup_with_id_through_llm(SPIDER_QUESTION_FORM_1, llm)
if lookup:
    cache_entry_id, response = lookup
    print(cache_entry_id)
else:
    print('No match.')
0a1339bc659790da078a4352c05bf422-0e4d63bf605cd5f4329128fcbe38762d
You can see that querying with another form of the "same" question results in the same id:
lookup2 = cassSemanticCache.lookup_with_id_through_llm(SPIDER_QUESTION_FORM_2, llm)
if lookup2:
    cache_entry_id2, response2 = lookup2
    print(cache_entry_id2)
else:
    print('No match.')
0a1339bc659790da078a4352c05bf422-0e4d63bf605cd5f4329128fcbe38762d
and second, the document id is used in the actual cache eviction:
cassSemanticCache.delete_by_document_id(cache_entry_id)
As a check, try asking that question again and note the cell execution time (you can also try re-running the above lookup cell...):
%%time
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 10 ms, sys: 432 µs, total: 10.4 ms Wall time: 1.19 s
'\n\nMost spiders have eight eyes, although some have fewer or more.'
Whole-cache deletion¶
Lastly, as you have seen earlier, you can empty the cache entirely with:
cassSemanticCache.clear()