Llama 2

Llama 2 is the next generation of Meta’s open source large language model.

Llama 2 was trained on 40% more data than Llama 1, and has double the context length. The chat models have further benefited from training on more than 1 million fresh human annotations. Included in this launch are the model weights and foundational code for pretrained and fine-tuned Llama language models, with sizes spanning from 7B to 70B parameters.

We are enthusiasts of this incredible model. On this website we explore its possibilities and share the latest news about Llama 2.

* The “Llama 2” name is the property of Meta. We are not affiliated with Meta.

  • LlamaIndex on Vertex AI

    The LlamaIndex team is excited to partner with the Vertex AI team (@googlecloud) to feature a brand-new RAG API on Vertex, powered by @llama_index advanced modules that enable e2e indexing, embedding, retrieval, and generation. It is simultaneously easy to set up and use, while providing developers programmatic flexibility to connect a range of data sources (local, GCS, GDrive)…

    Continue reading

  • Building JavaScript agents in LlamaIndex.TS

    The ultimate guide to building agents in TypeScript is here! This guide takes you step-by-step through: What is an Agent? In LlamaIndex, an agent is a semi-autonomous piece of software powered by an LLM that is given a task and executes a series of steps towards solving that task. It is given a set of…

    Continue reading
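
    The agent loop described above can be sketched in a few lines. This is a minimal, dependency-free illustration of the concept, not LlamaIndex's actual API: `fake_llm` is a hypothetical, scripted stand-in for a real model call, and in LlamaIndex the same loop is wrapped around a real LLM and registered tools.

    ```python
    # Sketch of an agent: an LLM-driven controller that, given a task,
    # repeatedly picks a tool (a "step") until it decides it is done.

    def multiply(a: int, b: int) -> int:
        """Tool: multiply two integers."""
        return a * b

    def add(a: int, b: int) -> int:
        """Tool: add two integers."""
        return a + b

    TOOLS = {"multiply": multiply, "add": add}

    def fake_llm(task, history):
        """Hypothetical planner standing in for the LLM; here it is
        hard-coded to solve '2 * 3 + 4' in two steps."""
        if not history:
            return ("call", "multiply", (2, 3))       # step 1: 2 * 3
        if len(history) == 1:
            return ("call", "add", (history[-1], 4))  # step 2: + 4
        return ("finish", history[-1])                # done: report result

    def run_agent(task):
        history = []  # results of executed steps so far
        while True:
            decision = fake_llm(task, history)
            if decision[0] == "finish":
                return decision[1]
            _, tool_name, args = decision
            history.append(TOOLS[tool_name](*args))

    result = run_agent("What is 2 * 3 + 4?")
    print(result)  # 10
    ```

    The "semi-autonomous" part is the loop: the controller, not the caller, decides which tool to invoke next and when to stop.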

  • Optimizing RAG with LlamaIndex

    A cool trick you can use to improve retrieval performance in your RAG pipelines is to fine-tune the embedding model (bi-encoder) based on labels from a cross-encoder 💡 Cross-encoders are crucial for reranking but are way too slow for retrieving over large numbers of documents. This fine-tuning technique gives you all the speed advantages of direct…

    Continue reading
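
    The distillation idea in that teaser can be illustrated with a toy, dependency-light sketch: score query–document pairs with a frozen "cross-encoder", then fit a fast bi-encoder so that its dot-product scores match those labels. Everything here is a stand-in under stated assumptions: the tiny linear encoders and the `cross_encoder_score` function are hypothetical; a real pipeline would fine-tune actual sentence-embedding models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim_in, dim_emb, n_pairs = 8, 4, 64

    # Random stand-in features for query/document pairs.
    queries = rng.normal(size=(n_pairs, dim_in))
    docs = rng.normal(size=(n_pairs, dim_in))

    # Hypothetical frozen cross-encoder: scores the concatenated pair jointly
    # (accurate but, in real systems, too slow to run over a whole corpus).
    W_cross = rng.normal(size=(2 * dim_in,))
    def cross_encoder_score(q, d):
        return np.tanh(np.concatenate([q, d], axis=-1) @ W_cross)

    labels = np.array([cross_encoder_score(q, d) for q, d in zip(queries, docs)])

    # Trainable bi-encoder: one linear projection shared by queries and docs;
    # relevance is a plain dot product, so retrieval stays fast.
    W = rng.normal(size=(dim_in, dim_emb)) * 0.1

    def bi_encoder_scores(W):
        eq, ed = queries @ W, docs @ W
        return np.sum(eq * ed, axis=1)

    def loss(W):
        return np.mean((bi_encoder_scores(W) - labels) ** 2)

    # Finite-difference gradient descent keeps the sketch dependency-free.
    lr, eps = 0.05, 1e-5
    initial = loss(W)
    for _ in range(200):
        grad = np.zeros_like(W)
        for i in range(dim_in):
            for j in range(dim_emb):
                Wp = W.copy()
                Wp[i, j] += eps
                grad[i, j] = (loss(Wp) - loss(W)) / eps
        W -= lr * grad

    final = loss(W)
    print(f"MSE to cross-encoder labels: {initial:.3f} -> {final:.3f}")
    ```

    After training, the bi-encoder's cheap dot-product scores approximate the cross-encoder's judgments, which is exactly the speed-for-accuracy trade the post describes.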