Channel: Weaviate Community Forum - Latest posts

Locally running RAG pipeline with Verba and Llama3 with Ollama

Oh, nice!

Thanks for sharing!

I have also noticed Ollama performing better when running directly on the host instead of in Docker.

For one dataset, the import succeeded on the host but failed in Docker.

I'm running on a Mac without a dedicated GPU, so that may also be a factor.
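For anyone seeing the same thing, one workaround is to keep Ollama running natively on the host (where it can use Metal acceleration on Apple Silicon) and point the Dockerized Verba at it via `host.docker.internal`. A rough sketch, assuming Verba's `OLLAMA_URL` and `OLLAMA_MODEL` environment variables and the default Ollama port:

```shell
# Run Ollama natively on the host and pull the model
ollama serve &
ollama pull llama3

# Start Verba in Docker, but point it at the host's Ollama.
# host.docker.internal resolves to the host machine on Docker
# Desktop for Mac, so the container talks to the native Ollama
# instead of a containerized (CPU-constrained) one.
docker run -p 8000:8000 \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  -e OLLAMA_MODEL=llama3 \
  semitechnologies/verba
```

The image name and variable names above are assumptions based on Verba's published Docker setup; check your `.env` / `docker-compose.yml` for the exact keys your version expects.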

