Channel: Weaviate Community Forum - Latest posts

Use Ollama embeddings hosted on a server with Weaviate


Thanks, DudaNogueira.
I have tried this, but it only works with local Ollama embeddings. With the Ollama embedding model hosted on a server, I need to pass in the URL as well as an auth token when generating embeddings, and I am unable to find a way to pass the auth token to the "text2vec-ollama" vectorizer.
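For reference, this is roughly the setup that works locally (a minimal sketch assuming the v4 Python client; the collection name and endpoint are illustrative). As far as I can see, `text2vec_ollama` only takes an endpoint and a model name, with no parameter for an auth header:

```python
import weaviate
from weaviate.classes.config import Configure

client = weaviate.connect_to_local()

# Works when Ollama runs locally: text2vec-ollama accepts the endpoint
# and model name, but exposes no auth-token setting.
client.collections.create(
    "Articles",  # illustrative collection name
    vectorizer_config=Configure.Vectorizer.text2vec_ollama(
        api_endpoint="http://host.docker.internal:11434",
        model="nomic-embed-text:latest",
    ),
)

client.close()
```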

Here is a sample curl request that accesses the hosted embedding model from my local machine:

curl https://ollama-inference-url/api/embeddings \
  -H "Authorization: AUTH-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text:latest", "prompt": "Why is sky blue?"}'
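For comparison, the same request sketched in Python (the URL and token are the placeholders from the curl above; this only builds the request, nothing is sent):

```python
import json

# Placeholders matching the curl command above.
OLLAMA_URL = "https://ollama-inference-url/api/embeddings"
AUTH_TOKEN = "AUTH-TOKEN"

headers = {
    "Authorization": AUTH_TOKEN,  # the header I need the vectorizer to send
    "Content-Type": "application/json",
}
payload = {"model": "nomic-embed-text:latest", "prompt": "Why is sky blue?"}
body = json.dumps(payload)

# e.g. with requests: requests.post(OLLAMA_URL, headers=headers, data=body)
print(body)
```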
