The LlamaIndex team is excited to partner with the Vertex AI team (@googlecloud) to feature a brand-new RAG API on Vertex AI, powered by @llama_index advanced modules that enable end-to-end indexing, embedding, retrieval, and generation.
It is easy to set up and use, while giving developers the programmatic flexibility to connect a range of data sources (local, GCS, GDrive) and file types (PDF, GDoc, Slides, Markdown) and to experiment with both indexing and retrieval parameters.
Of course it supports all the latest LLMs:
- ✅ Gemini 1.5 Flash
- ✅ Gemini 1.5 Pro
- ✅ Gemini 1.0 models
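A minimal sketch of the flow described above, assuming the preview Python SDK (`vertexai.preview.rag` and the `create_corpus`, `import_files`, and `retrieval_query` calls); the function name, bucket path, and parameter values here are illustrative, so confirm exact signatures against the docs linked below:

```python
# Hypothetical end-to-end sketch of the LlamaIndex-on-Vertex-AI RAG flow:
# create a corpus, ingest files from GCS, then run a retrieval query.
# SDK names follow the public preview module; verify against the docs.

def build_and_query_corpus(project_id: str, gcs_path: str, question: str):
    """Index a GCS folder into a RAG corpus and retrieve chunks for a query."""
    import vertexai
    from vertexai.preview import rag  # preview SDK, subject to change

    vertexai.init(project=project_id, location="us-central1")

    # 1. Indexing + embedding: create a corpus and import files from GCS.
    corpus = rag.create_corpus(display_name="demo-corpus")
    rag.import_files(
        corpus.name,
        [gcs_path],          # e.g. "gs://my-bucket/docs/"
        chunk_size=512,      # indexing parameter (illustrative value)
        chunk_overlap=64,
    )

    # 2. Retrieval: fetch the top-k most relevant chunks for the question.
    return rag.retrieval_query(
        rag_corpora=[corpus.name],
        text=question,
        similarity_top_k=5,  # retrieval parameter (illustrative value)
    )
```

The same corpus can also be attached as a retrieval tool when calling a Gemini model, which is how the generation step plugs in; see the docs for details.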
Full examples, API reference, and pricing information are provided in the docs below.
LlamaIndex on Vertex AI Docs:
https://cloud.google.com/vertex-ai/generative-ai/docs/llamaindex-on-vertexai
Vertex AI I/O announcements blog:
https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-io-announcements