Thanks DudaNogueira.
I have tried this, but it only works with local Ollama embeddings. With the Ollama embedding model hosted on a server, I need to pass in the URL as well as an auth token when generating embeddings. I am unable to find a way to pass an auth token to the `text2vec-ollama` vectorizer.
Here is a sample curl command that accesses the hosted embedding model from my local machine:
```bash
curl https://ollama-inference-url/api/embeddings \
  -H "Authorization: AUTH-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text:latest", "prompt": "Why is sky blue?"}'
```
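In case it helps others, the interim workaround I am considering is to skip the `text2vec-ollama` vectorizer entirely and bring my own vectors: call the hosted Ollama endpoint directly with the auth header (mirroring the curl above) and pass the resulting embedding to Weaviate at insert time. A minimal sketch with the v4 Python client, assuming a hypothetical collection named `Docs` configured with no vectorizer; the URL and `AUTH-TOKEN` are the same placeholders as in the curl command:

```python
import requests
import weaviate

OLLAMA_URL = "https://ollama-inference-url/api/embeddings"  # placeholder from the curl above
AUTH_TOKEN = "AUTH-TOKEN"  # placeholder auth token


def embed(text: str) -> list[float]:
    # Call the hosted Ollama embeddings endpoint with the auth header,
    # exactly as the curl command above does.
    resp = requests.post(
        OLLAMA_URL,
        headers={"Authorization": AUTH_TOKEN, "Content-Type": "application/json"},
        json={"model": "nomic-embed-text:latest", "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]


client = weaviate.connect_to_local()  # or however you connect to your instance
try:
    # "Docs" is a hypothetical collection created with vectorizer=none, so
    # Weaviate stores the vector we supply instead of calling Ollama itself.
    docs = client.collections.get("Docs")
    docs.data.insert(
        properties={"text": "Why is sky blue?"},
        vector=embed("Why is sky blue?"),
    )
finally:
    client.close()
```

This sidesteps the vectorizer config, but it does mean queries need their vectors generated the same way (e.g. `near_vector` with `embed(query)`), so native support for an auth header in `text2vec-ollama` would still be the cleaner solution.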