Hi @Just_Guide7361 !!
Thanks for pointing it out!!
I will take the opportunity and also write a recipe using the multi-tenancy feature with LangChain.
Here is working code using create_retrieval_chain
(I will update the recipe later today):
import weaviate
from weaviate.classes.query import Filter

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_weaviate.vectorstores import WeaviateVectorStore
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
# client = weaviate.connect_to_weaviate_cloud(...)
embeddings = OpenAIEmbeddings()
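# passing an empty list here imports nothing; it just wires the store up to the existing collection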
db = WeaviateVectorStore.from_documents([], embeddings, client=client, index_name="WikipediaLangChain")
source_file = "brazil-wikipedia-article-text.pdf"
#source_file = "netherlands-wikipedia-article-text.pdf"
where_filter = Filter.by_property("source").equal(source_file)
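# "source" must match the metadata key your document loader wrote at import time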
# we want our retriever to filter the results
retriever = db.as_retriever(search_kwargs={"filters": where_filter})
system_prompt = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer "
    "the question. If you don't know the answer, say that you "
    "don't know. Use three sentences maximum and keep the "
    "answer concise."
    "\n\n"
    "{context}"
)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        ("human", "{input}"),
    ]
)
llm = ChatOpenAI(model="gpt-4o-mini")
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
response = rag_chain.invoke({"input": "What is the traditional food of this country?"})
print(response["answer"])
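And as a teaser for the multi-tenancy recipe, here is a minimal sketch of the direction it will take. The tenant name "tenantA" and the docs variable are placeholders, not from the code above: langchain-weaviate expects the tenant to be passed both when importing and when querying, and search_kwargs are forwarded to the underlying similarity search.
mt_db = WeaviateVectorStore.from_documents(
    docs,  # placeholder: your already-chunked documents
    embeddings,
    client=client,
    index_name="WikipediaLangChain",
    tenant="tenantA",  # placeholder tenant; the collection is created with multi-tenancy enabled
)
# forward the same tenant at query time
mt_retriever = mt_db.as_retriever(search_kwargs={"tenant": "tenantA"})
mt_chain = create_retrieval_chain(mt_retriever, question_answer_chain)
print(mt_chain.invoke({"input": "What is the traditional food of this country?"})["answer"])
Since each tenant is fully isolated, the chain only ever sees that tenant's documents, so you may not even need the source filter in that setup.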
By the way, we host a lot of online and in-person webinars and workshops. Check it out: Online Workshops & Events | Weaviate - Vector Database
Thanks and hope you are enjoying your “Weaviate journey”!!