gRPC trying to send message larger than max error: when trying...
I am just trying to query using collection.query.fetch_objects with a filter on a property that holds the documentName. The query filter is something like this - filters = {"field": "documentId",...
gRPC trying to send message larger than max error: when trying...
I see the limit is being set in the python-client. Is there a way to override it? I am confused as to why a simple query is maxing out the limit. github.com...
gRPC trying to send message larger than max error: when trying...
How big are your objects? You can set which properties are returned using return_properties=["prop1", ...]. By default, all non-blob properties are returned.
Connection refused text2vec-ollama
Description Hello, I’m trying to set up Weaviate and Ollama, with both running in separate Docker containers. Weaviate is set up as default, and Ollama can be accessed via port 11435. However, when I...
Connection refused text2vec-ollama
Have also tried setting the ModuleConfig when creating the schema class as: saporoSchema := &models.Class{ Class: "SaporoData2", Vectorizer: "text2vec-ollama", ModuleConfig:...
Connection refused text2vec-ollama
Hi! Welcome to our community!! As you are using Ollama in Docker, you should probably change "apiEndpoint": "http://localhost:11435" to "apiEndpoint": "http://ollama:11435". Let me know if this...
Connection refused text2vec-ollama
Thank you for the prompt response! Indeed that did the trick, must’ve messed something up earlier because as far as I could tell my moduleConfig wasn’t being used. Appreciate the help!
gRPC connection failure when processing data-intensive batches
Good morning! A question: we are batch processing a set of data, and we have noticed that after processing the first batch we started to receive this error: Query call with protocol GRPC search failed...
How to use a different embedding than OpenAI
Description In the Quickstart page you showed how to connect with OpenAI, but what if I wanted to use Ollama or open-source models from Hugging Face? How do I do it? try: questions =...
Weaviate OpenAI Embedding Models
Do we have any models for the text2vec-openai embedding module with a token limit greater than 8192? The message I’m getting: weaviate.exceptions.UnexpectedStatusCodeException: Create class! Unexpected...
How to use a different embedding than OpenAI
lamoboos223: ollection may not have been created properly! Unexpected status code: 422, with response body: {'error': [{'message': 'vectorizer: no module with name "text2vec-ollama" present'}]}. Hey!...
Weaviate OpenAI Embedding Models
Hi @spark!! By default, if you do not provide a model, it will use ada. However, you can use any of the supported models, as stated in the error message: ada babbage curie davinci...
gRPC connection failure when processing data-intensive batches
Hi @Nancy_Viviana_Espino!! What is the batch configuration you are using? We suggest using something like this as a base, and then tweaking the batch size and concurrent requests according to the...
gRPC connection failure when processing data-intensive batches
Thank you for reviewing this case. I would like to clarify that we use batches mainly to process our vectors, and not to add data to the collection. However, we have faced a problem when searching for...
gRPC connection failure when processing data-intensive batches
You mean that you ingest not only the data but also the vectors, right? You can also do that with batch: with collection.batch.dynamic() as batch: for i, data_row in enumerate(data_rows):...
View ArticleWeaviate Openai Embedding Models
I totally understand @DudaNogueira but could you please help me out in this regard which I’m facing, I was using the default. {'error': [{'message': "update vector: connection to: OpenAI API failed...
gRPC connection failure when processing data-intensive batches
Effectively, we generate the vectors in a separate, independent flow; during this process we download them and upload them to a collection in order to process them and find the closest one for each...
Google automatic token generation
I noticed that you added automatic token generation with Google. Currently, I have to run gcloud auth print-access-token to get the token, but I wanted to use the automatic token generation that...
ColBERTv2 Support
Hey both, thanks so much for sharing this notebook @DudaNogueira! Hey @JK_Rider, could you please point me to a more specific passage where this is mentioned? ColBERT / v2 / PLAID variants will all...
ColBERTv2 Support
Quote: The ColBERT v2.0 library transforms a text chunk into a matrix of token-level embeddings. The output is a 128-dimensional vector for each token in a chunk. This results in a two-dimensional...
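The quoted passage describes a two-dimensional output: one 128-dimensional vector per token. A toy illustration of those shapes, plus the MaxSim-style late-interaction scoring ColBERT uses over them; random numbers stand in for real model embeddings:

```python
import numpy as np

# A chunk of n tokens yields an (n, 128) matrix: one 128-dimensional
# embedding per token. Random values stand in for real model output.
rng = np.random.default_rng(0)
tokens = ["weaviate", "stores", "token", "level", "embeddings"]
chunk_matrix = rng.normal(size=(len(tokens), 128))
print(chunk_matrix.shape)  # (5, 128)

# MaxSim late interaction: for each query token, take its best-matching
# chunk token's similarity, then sum over the query tokens.
query_matrix = rng.normal(size=(3, 128))
sim = query_matrix @ chunk_matrix.T  # (3, 5) token-to-token similarities
score = sim.max(axis=1).sum()        # MaxSim score for this chunk
print(float(score))
```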