Channel: Weaviate Community Forum - Latest posts

Locally running RAG pipeline with Verba and Llama3 with Ollama


I was able to get it working, though not quite the way I wanted. I run the Weaviate database in a Docker container, Ollama locally, and Verba locally via pip install goldenverba followed by verba start.
Here's my docker-compose file:


networks:
  local-net:
    external: true
    name: local-net  # This is the Docker network that allows access to your local machine

services:
  weaviate:
    image: semitechnologies/weaviate:latest
    environment:
      - QUERY_DEFAULTS_LIMIT=20
      - ENABLE_MODULES=text2vec-verba
      - VERBA_API_URL=http://host.docker.internal:8000  # Access Verba on local port 8000
    ports:
      - "8080:8080"  # Expose Weaviate on port 8080
    networks:
      - local-net
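For reference, the bring-up order looks roughly like this (a sketch, not exact commands from my shell history; the llama3 model name and default ports are assumptions, so adjust them to your setup):

```shell
# 1. Start Weaviate from the compose file above
docker compose up -d

# 2. Make sure the Ollama server is running locally, then pull the model
#    (llama3 assumed here)
ollama serve &
ollama pull llama3

# 3. Install and start Verba on the host; by default it serves on port 8000,
#    matching VERBA_API_URL in the compose file
pip install goldenverba
verba start
```

One caveat: host.docker.internal resolves out of the box on Docker Desktop (macOS/Windows), but on plain Linux you generally need to add extra_hosts: ["host.docker.internal:host-gateway"] to the weaviate service for the container to reach services on the host.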

