Filter index breaks after updating/inserting new records
Description Hi, I’m seeing a flaky issue where searching for an object via a text-property filter fails after updating/inserting new records. I first saw this on 1.26.3, but I upgraded to 1.27.1 and...
View Article

Alternatives to custom vectorizer for Weaviate Cloud?
We are currently using PyTorch and Transformers in a Vectoriser class that uses the model VECTOR_MODEL_NAME = “sentence-transformers/all-MiniLM-L6-v2” to vectorise SQL, Python, and YAML files...
View Article

Locally running RAG pipeline with Verba and Llama3 with Ollama
Thank you for the quick response. What does your file structure look like? I didn’t use Docker to launch Weaviate. I used the embedded path because the blog post said it would be the easiest. That’s...
View Article

How does encryption work on Weaviate?
@DudaNogueira Thank you for your reply. Am I correct in understanding that if I self-host Weaviate, I can enable features like encryption at rest or KMS integration, as mentioned in the second link...
View Article

Locally running RAG pipeline with Verba and Llama3 with Ollama
I think defining the schema will solve my problem. Error message endpoint: localhost:8080/v1/schema
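For reference, a minimal sketch of a class definition that could be posted to the /v1/schema endpoint mentioned above; the class name, property name, and vectorizer module here are illustrative assumptions, not taken from the thread:

```python
# Hypothetical minimal class definition for Weaviate's /v1/schema REST endpoint.
# "Document", "content", and "text2vec-ollama" are illustrative assumptions.
document_class = {
    "class": "Document",
    "vectorizer": "text2vec-ollama",
    "properties": [
        {"name": "content", "dataType": ["text"]},
    ],
}
# Serialized as JSON, this dict is the request body for
# POST http://localhost:8080/v1/schema.
```

Defining the class explicitly like this avoids relying on auto-schema inference at first insert.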
View Article

Inconsistent behaviour of with_where search
The behaviour was fixed by increasing the RAM of the server hosting the deployment.
View Article

Couldn't connect to Weaviate, check your URL/API KEY: [Errno 30] Read-only...
Oh, thanks. I’ve logged in on the front-end page of Verba, and I started ollama serve and ollama run llama3 in the background, but I couldn’t get feedback from the model. Like this: “Query failed: 500,...
View Article

Errors: text too long for vectorization. Tokens for text: 10440, max tokens...
chunk_collection_definition = {
    "class": "DEmbeddings",
    "vectorizer": "text2vec-mistral",
    "moduleConfig": {
        "generative-mistral": {}
    },
    "properties": [
        {
            "name": "chunk",
            "dataType": ["text"],
        },
        {...
View Article

How does encryption work on Weaviate?
Hi! Our hosted cloud has all the mentioned implementations, along with backups, easy upgrades, support, SLAs, etc. All of that is already set up for you as part of our services. The same binary we release...
View Article

Locally running RAG pipeline with Verba and Llama3 with Ollama
I believe Docker is the easiest way. Once you know how to play around with it, it gets really easy to run apps. Also, you get a more production-ready deployment, considering that embedded is still marked...
View Article

Inconsistent behaviour of with_where search
Oh wow! First, thanks for sharing, @lrx, and secondly, sorry that we missed this message. When you say improved, you mean you increased Weaviate’s allocated memory, right? Can you share the version you...
View Article

How does encryption work on Weaviate?
@DudaNogueira Thank you for your response, much appreciated.
View Article

Couldn't connect to Weaviate, check your URL/API KEY: [Errno 30] Read-only...
Can you paste the entire error stack? This seems to be a 500 error in Ollama. Do you see any outstanding errors in ~/.ollama/logs/server.log?
View Article

Errors: text too long for vectorization. Tokens for text: 10440, max tokens...
Hi @Muhammad_Ashir! You need to chunk your content before ingesting it into the database.
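As a rough illustration of chunking before ingest, here is a minimal word-window chunker with overlap; approximating token counts by word counts is an assumption for the sketch, not how the vectorizer actually tokenizes:

```python
def chunk_text(text, max_words=200, overlap=20):
    """Split text into overlapping chunks of at most max_words words.

    Word counts stand in for token counts here; a real tokenizer
    (e.g. the one behind text2vec-mistral) counts differently, so
    leave headroom below the model's token limit.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap  # how far the window advances each chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # the last window already covers the end of the text
    return chunks
```

Each returned chunk would then be ingested as its own object, keeping every piece under the vectorizer's token limit.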
View Article

Errors: text too long for vectorization. Tokens for text: 10440, max tokens...
I got you, but I am concerned about fetching: following the techniques mentioned in the documentation, if I save the content as chunks, then when searching I have to fetch the other chunks...
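One common way to handle that concern (sketched here as an assumption, not something prescribed in the thread) is to store each chunk's position, so that after a search hit you can fetch a window of neighboring chunks by index:

```python
def neighbor_window(chunks, hit_index, window=1):
    """Return the hit chunk plus up to `window` chunks on each side.

    In a real deployment `chunks` would be objects carrying a
    chunk_index property (a hypothetical name) plus a shared document
    id, and the window would be fetched with a filter on those
    properties rather than a Python list slice.
    """
    lo = max(0, hit_index - window)
    hi = min(len(chunks), hit_index + window + 1)
    return chunks[lo:hi]
```

The neighbors are then concatenated with the hit chunk to rebuild enough surrounding context for generation.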
View Article

Vectorization failed 404 http://host.docker.internal:11434/api/embed
Description Running Windows Subsystem for Linux (WSL2) with Docker Desktop handling containerization from Windows. I have Ollama started with a model; it works just fine when testing it with...
View Article

Vectorization failed 404 http://host.docker.internal:11434/api/embed
Hi @Kieran_Sears!! Welcome to our community! I was just playing around with Verba + Ollama, all in Docker. I am not sure exactly how WSL2 plays with Windows + Docker, but can you try running everything...
View Article

Vectorization failed 404 http://host.docker.internal:11434/api/embed
So now, if you want to add a new model, for example nomic-embed-text, you should:

docker compose exec -ti ollama ollama pull nomic-embed-text
docker compose restart verba

You...
View Article

Vectorization failed 404 http://host.docker.internal:11434/api/embed
PS: While vectorizing large documents, I have faced this error: "Failed to acquire semaphore" error="context canceled" (github.com/ollama/ollama issue, opened 02:16AM 12 Jun 24 UTC, closed 04:35PM 07 Aug 24...
View Article