Channel: Weaviate Community Forum - Latest posts

Issues with Batch Import and Vectorization


hi @Felix !!

Welcome to our community :hugs:

Unfortunately, there isn’t a one-size-fits-all answer when it comes to batching, because there are a lot of variables at play.

The dynamic batcher will try its best to calculate the optimal batch size, taking into consideration the latency between the client and the server.

This works well in some situations, but, for example, in a local environment the latency can be so low that a large batch size ends up overwhelming the server.

While it works in most situations, a good approach is exactly what you did: start small and find the sweet-spot combination of batch size and number of workers.
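To make "start small and find the sweet spot" concrete, here is a minimal, self-contained sketch of timing fixed-size batches at increasing sizes. `import_batch` is a placeholder for whatever actually sends one batch to Weaviate (for example, inserting objects inside the v4 client's `client.batch.fixed_size(...)` context manager); it is stubbed out here so the sketch runs on its own.

```python
import time

def chunked(items, size):
    """Yield successive fixed-size chunks of `items`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def time_import(objects, batch_size, import_batch):
    """Import all objects in fixed-size batches; return elapsed seconds."""
    start = time.perf_counter()
    for batch in chunked(objects, batch_size):
        import_batch(batch)  # placeholder for the real Weaviate insert
    return time.perf_counter() - start

# Start small and work upward, keeping whichever setting is fastest
# against your actual cluster:
objects = [{"title": f"obj-{i}"} for i in range(1000)]
for batch_size in (10, 50, 100, 200):
    elapsed = time_import(objects, batch_size, import_batch=lambda b: None)
    print(batch_size, round(elapsed, 4))
```

You would repeat the same sweep over the number of concurrent workers; the best combination depends on your cluster resources and vectorizer throughput, as noted below.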

Other factors that will influence this are the amount of resources you have allocated to your cluster, the size of your objects and, of course, the throughput of your vectorizer models.

Regarding the score, and I have seen this confusion a lot: you need to understand that a similarity search gives you a distance, while a keyword search (BM25) gives you a score.

Hybrid will perform both a similarity search and a keyword search, so you end up with both a score and a distance, which Weaviate then fuses into a single normalized score. You can inspect each of them through the explain_score metadata.
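As a rough illustration of that fusion step, here is a simplified, self-contained sketch of relative-score fusion: each result set is min-max normalized to [0, 1], distances are inverted so that higher is better on both sides, and the two are blended with an alpha weight. This is only a sketch of the idea, not Weaviate's exact implementation; the document ids and raw values are made up.

```python
def min_max_normalize(values):
    """Scale a dict of raw values to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:
        return {k: 1.0 for k in values}
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

def fuse(bm25_scores, vector_distances, alpha=0.5):
    """Blend keyword scores and vector distances into one normalized score.

    alpha weights the vector side (alpha=1 -> pure vector, alpha=0 -> pure BM25).
    Distances are inverted (1 - normalized) because lower distance = closer.
    """
    kw = min_max_normalize(bm25_scores)
    vec = {k: 1.0 - v for k, v in min_max_normalize(vector_distances).items()}
    docs = set(kw) | set(vec)
    return {
        doc: alpha * vec.get(doc, 0.0) + (1 - alpha) * kw.get(doc, 0.0)
        for doc in docs
    }

bm25 = {"doc1": 2.5, "doc2": 1.0, "doc3": 0.2}   # higher = better
dist = {"doc1": 0.10, "doc2": 0.40, "doc4": 0.05}  # lower = closer
fused = fuse(bm25, dist, alpha=0.5)
```

A document that ranks well in both searches (doc1 here) ends up on top, which is exactly the behavior explain_score lets you verify per result.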

To figure out what might be going wrong in your code, could you provide a fully reproducible example, including at least some sample data? An end-to-end example that we can also run really helps us reproduce the behavior you are seeing. A Python notebook is usually the best way to share this kind of code.

Let me know if this helps.

And once again, welcome to our community :people_hugging:

