Locally running RAG pipeline with Verba and Llama3 with Ollama

Description

I tried following the blog post Locally running RAG pipeline with Verba and Llama3 with Ollama (https://weaviate.io/blog/local-llm-with-verba-for-rag) to build this locally, but it won't import the PDF. The document is less than 300 KB.
Error message:

✘ No documents imported 0 of 1 succesful tasks
ℹ FileStatus.ERROR | the-heros-journey-joseph-campbell.pdf | Import for
the-heros-journey-joseph-campbell.pdf failed: Import for
the-heros-journey-joseph-campbell.pdf failed: Batch vectorization failed:
Vectorization failed for some batches: 500, message='Internal Server Error',
url=URL('http://localhost:11434/api/embed') | 0
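The failing URL in the log is the Ollama embedding endpoint itself, so a direct request should reproduce the 500 outside of Verba. A minimal sketch (the llama3 model name is an assumption from the blog post; swap in whatever model your setup is configured to embed with):

    import requests

    # Hit the same endpoint the Verba import calls, but outside of Verba.
    # "llama3" is an assumption from the blog post; use the model your
    # setup is actually configured to embed with.
    resp = requests.post(
        "http://localhost:11434/api/embed",
        json={"model": "llama3", "input": "test sentence"},
        timeout=60,
    )
    print(resp.status_code)
    print(resp.text[:200])

If this also returns a 500, the problem is on the Ollama side (for example, the model not being pulled) rather than in Verba's import.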

Server Setup Information

I followed the Embedded deployment path of the blog post on a MacBook Pro. I can get Ollama to work locally (a sketch contrasting the generate and embed endpoints follows the list below).

  • Weaviate Server Version:
  • Deployment Method: embed
  • Multi Node? Number of Running Nodes:
  • Client Language and Version: Python
  • Multitenancy?:
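Since "Ollama works locally" in my case means the generate endpoint responds, here is a sketch contrasting the two endpoints using the .env variables from the blog post setup (OLLAMA_URL and OLLAMA_EMBED_MODEL are my understanding of the names; adjust to your configuration):

    import os
    import requests

    # Contrast the generate endpoint (what "Ollama works locally" usually
    # exercises) with the embed endpoint the import is failing on.
    # Env var names (OLLAMA_URL, OLLAMA_EMBED_MODEL) are assumptions based
    # on the blog post's .env setup; adjust to match your configuration.
    base = os.environ.get("OLLAMA_URL", "http://localhost:11434")
    model = os.environ.get("OLLAMA_EMBED_MODEL", "llama3")

    gen = requests.post(
        f"{base}/api/generate",
        json={"model": model, "prompt": "Say hi", "stream": False},
        timeout=120,
    )
    print("generate:", gen.status_code)

    emb = requests.post(
        f"{base}/api/embed",
        json={"model": model, "input": "Say hi"},
        timeout=120,
    )
    print("embed:", emb.status_code, emb.text[:200])

If generate succeeds but embed fails for the same model, that would explain the batch vectorization error even though Ollama appears to be running fine.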

Any Additional Information

