[Bug]: LlamaIndex managed Vertex AI index needs to be updated. #14637
Hey there, @wadave! I'm here to help you with any bugs, questions, or becoming a contributor. Let's squash those bugs together! 🤖 To address the issues with the …

These updates should resolve the issues you encountered with the …

Hi Bot, it should be …
Thank you for the correction. Here is the updated code for the `VertexAIRetriever` class:

```python
from typing import Any, List, Optional

from google.cloud.aiplatform import telemetry
from vertexai.preview import rag

from llama_index.core.base.base_retriever import BaseRetriever
from llama_index.core.schema import NodeWithScore, QueryBundle, TextNode


class VertexAIRetriever(BaseRetriever):
    def __init__(
        self,
        corpus_name: str,
        similarity_top_k: Optional[int] = None,
        vector_distance_threshold: Optional[float] = 0.3,
        user_agent: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the Vertex AI Retriever."""
        self.rag_resources = [rag.RagResource(rag_corpus=corpus_name)]
        self._similarity_top_k = similarity_top_k
        self._vector_distance_threshold = vector_distance_threshold
        self._user_agent = user_agent or "llama-index/0.0.0"
        # Initialize BaseRetriever state (callback manager, etc.).
        super().__init__(**kwargs)

    def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        """Retrieve from the platform."""
        with telemetry.tool_context_manager(self._user_agent):
            response = rag.retrieval_query(
                text=query_bundle.query_str,
                rag_resources=self.rag_resources,
                similarity_top_k=self._similarity_top_k,
                vector_distance_threshold=self._vector_distance_threshold,
            )
        if response.contexts:
            return [
                NodeWithScore(
                    node=TextNode(text=context.text), score=context.distance
                )
                for context in response.contexts.contexts
            ]
        return []

    async def _aretrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        """Asynchronously retrieve from the platform."""
        return self._retrieve(query_bundle=query_bundle)
```

This should correctly handle the …
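For reference, a minimal usage sketch of the class above (the corpus resource name and query string are illustrative placeholders, not values from this issue):

```python
# Hypothetical usage of VertexAIRetriever; the corpus resource name
# and query string below are placeholders.
retriever = VertexAIRetriever(
    corpus_name="projects/my-project/locations/us-central1/ragCorpora/123",
    similarity_top_k=5,
)

# BaseRetriever.retrieve accepts a plain query string.
nodes = retriever.retrieve("What does the corpus say about pricing?")
for node_with_score in nodes:
    print(node_with_score.score, node_with_score.node.text[:80])
```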
Bug Description
I tried this https://docs.llamaindex.ai/en/stable/examples/managed/VertexAIDemo/ but it gave errors.
```python
print(index.list_files())
```
gave
```
list_files() got an unexpected keyword argument 'corpus'
```
It seems the Vertex AI RAG API (`from vertexai.preview import rag`) has evolved: the latest RAG API uses `corpus_name` instead of `corpus`. https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/indices/llama-index-indices-managed-vertexai/llama_index/indices/managed/vertexai needs to be updated.
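To illustrate the rename, the call would look roughly like this (a sketch assuming the current `vertexai.preview.rag` surface; the corpus resource name is a placeholder):

```python
from vertexai.preview import rag

# Placeholder corpus resource name.
CORPUS = "projects/my-project/locations/us-central1/ragCorpora/123"

# Old integration code (now raises TypeError):
# files = rag.list_files(corpus=CORPUS)

# The current API expects `corpus_name` instead:
files = rag.list_files(corpus_name=CORPUS)
```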
```python
query_engine = index.as_query_engine()
```
gave
```
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
```
at first. After I set up `Settings.llm = vertex_gemini` (not sure if this is the right way), it gave:
```
AttributeError: Unknown field for RetrieveContextsResponse: rag_contexts
```
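A sketch of that setup (assuming the `llama-index-llms-vertex` integration; the model name is illustrative):

```python
from llama_index.core import Settings
from llama_index.llms.vertex import Vertex

# Assumes `pip install llama-index-llms-vertex`; model name is illustrative.
vertex_gemini = Vertex(model="gemini-pro", temperature=0.2)
Settings.llm = vertex_gemini
```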
Version
0.10.52
Steps to Reproduce
Run this notebook: https://docs.llamaindex.ai/en/stable/examples/managed/VertexAIDemo/
The errors will be reproduced.
Relevant Logs/Tracebacks
No response