Channel: Weaviate Community Forum - Latest posts

Dense search gRPC error after restarting Weaviate Docker

Description

I’m using Weaviate as the vector store database for my LangChain FastAPI app, and it seems that every time I restart the Weaviate Docker container, the previously created vector store starts throwing errors when queried.

Specifically, this error is returned when querying (hybrid search):

Traceback (most recent call last):
  File "venv/lib/python3.11/site-packages/weaviate/collections/grpc/query.py", line 649, in __call
    res, _ = self._connection.grpc_stub.Search.with_call(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.11/site-packages/grpc/_channel.py", line 1198, in with_call
    return _end_unary_response_blocking(state, call, True, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.11/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNKNOWN
        details = "dense search: search index idx_27d57efa8da7ac6e31a654e5503f1f01: remote shard KkYLQghuTYXB: resolve node name "0cfd736f585d" to host"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"dense search: search index idx_27d57efa8da7ac6e31a654e5503f1f01: remote shard KkYLQghuTYXB: resolve node name \"0cfd736f585d\" to host", grpc_status:2, created_time:"2024-09-02T14:26:53.369983+02:00"}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "venv/lib/python3.11/site-packages/langchain_weaviate/vectorstores.py", line 279, in _perform_search
    result = collection.query.hybrid(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.11/site-packages/weaviate/collections/queries/hybrid/query.py", line 105, in hybrid
    res = self._query.hybrid(
          ^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.11/site-packages/weaviate/collections/grpc/query.py", line 246, in hybrid
    return self.__call(request)
           ^^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.11/site-packages/weaviate/collections/grpc/query.py", line 658, in __call
    raise WeaviateQueryError(e.details(), "GRPC search")  # pyright: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
weaviate.exceptions.WeaviateQueryError: Query call with protocol GRPC search failed with message dense search: search index idx_27d57efa8da7ac6e31a654e5503f1f01: remote shard KkYLQghuTYXB: resolve node name "0cfd736f585d" to host.

The query call is made from another Docker container, which runs the LangChain app within a FastAPI framework.

Do note that I’m not getting this error when I create the vector store and then query it directly afterwards. It’s only after I restart the Weaviate Docker container that the previously created vector store starts returning this error when queried from my app.

Server Setup Information

  • Weaviate Server Version: 1.26.3 (Docker image from semitechnologies/weaviate:latest)
  • Deployment Method: Docker
  • Multi Node? Number of Running Nodes:
    I’m not sure; I’m deploying it through:
docker run -p 8080:8080 -p 50051:50051 --env-file .env.local -e QUERY_DEFAULTS_LIMIT=20 -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true -e PERSISTENCE_DATA_PATH=/data -v weaviate_data:/data semitechnologies/weaviate:latest

where .env.local:

ENVIRONMENT=local
HOST=0.0.0.0
PORT=80
ACCESSIBLE_HOST=localhost
ACCESSIBLE_PORT=8000
GOOGLE_APPLICATION_CREDENTIALS=/code/app/auth.json
WEAVIATE_SERVER_URL=http://weaviate:8080
WEAVIATE_SERVER_GRPC_PORT=50051
OPENAI_API_KEY=<key>
  • Client Language and Version:
    langchain-weaviate==0.0.2
    weaviate-client==4.6.4
    python 3.11.9
  • Multitenancy?:
    Not that I know of, using default configs
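One thing worth noting about the docker run command above: without a fixed hostname, Docker assigns the container a random ID each run (which matches the “0cfd736f585d” in the error), and Weaviate derives its node name from that hostname unless CLUSTER_HOSTNAME is set. A hedged sketch of a restart-stable invocation follows; CLUSTER_HOSTNAME is a documented Weaviate environment variable, but the value node1 is my own arbitrary choice, and whether this resolves the error here is an assumption:

```shell
# Pin the node name so it survives container restarts; otherwise the
# metadata persisted in the /data volume may keep referencing the
# previous container's random hostname.
docker run -p 8080:8080 -p 50051:50051 \
  --env-file .env.local \
  -e QUERY_DEFAULTS_LIMIT=20 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  -e PERSISTENCE_DATA_PATH=/data \
  -e CLUSTER_HOSTNAME=node1 \
  -v weaviate_data:/data \
  semitechnologies/weaviate:latest
```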

Additional information

The vector store is created through the LangChain FastAPI backend using langchain-weaviate:

WeaviateVectorStore.from_documents(
    documents,
    client=WeaviateClient(connection_params=ConnectionParams.from_url("http://localhost:8080", 50051)),
    embedding=OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"]),
    index_name=index_name
)

loaded through:

WeaviateVectorStore(
    client=WeaviateClient(connection_params=ConnectionParams.from_url("http://localhost:8080", 50051)),
    embedding=OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"]),
    index_name=index_name,
    text_key="text"
)

and queried through:

WeaviateVectorStore.as_retriever(search_type="similarity", search_kwargs={"k": 50, "filters": retriever_filters})
