Blog

  • Run Llama-2 on Groq

    Groq is insanely fast, and we’re excited to feature an official integration with LlamaIndex. The @GroqInc LPU is specially designed for LLM generation and currently supports Llama-2 and Mixtral models. About Groq: Welcome to Groq! 🚀 Here at Groq, we are proud to introduce the world’s inaugural Language Processing Unit™, or LPU. This groundbreaking LPU…

    Continue reading
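Since Groq serves its models behind an OpenAI-compatible chat-completions endpoint, a request to an LPU-hosted Llama-2 model can be sketched as below. This is a minimal sketch, assuming the endpoint path and model name from the announcement era; nothing is sent over the network here.

```python
import json

# Assumed OpenAI-compatible endpoint (not contacted in this sketch).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(prompt: str, model: str = "llama2-70b-4096") -> str:
    """Build the JSON body for a single-turn chat completion request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

print(build_groq_request("Why is the LPU fast for LLM generation?"))
```

In practice you would POST this body to the endpoint with an `Authorization: Bearer <API key>` header, or use the LlamaIndex Groq integration directly.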

  • LlamaParse

    Today marks a significant milestone in the LlamaIndex ecosystem with the unveiling of LlamaCloud, the latest offering in managed parsing, ingestion, and retrieval services. This innovation is tailored to enhance the capabilities of your LLM and RAG applications by providing them with production-level context-augmentation. LlamaCloud enables enterprise AI engineers to concentrate on developing business logic…

    Continue reading

  • LLaMA Code Assistant

    Coding assistants have revolutionized how developers work globally, offering a unique blend of convenience and efficiency. However, a common limitation has been their reliance on an internet connection, posing a challenge in scenarios like flights or areas without internet access. Enter an innovative solution that addresses this issue head-on: the ability to utilize a coding…

    Continue reading

  • CodeLlama 70B

    CodeLlama-70B-Instruct achieves 67.8 on HumanEval, making it one of the highest performing open models available today. CodeLlama-70B is the most performant base for fine-tuning code generation models and we’re excited for the community to build on this work. Code Llama 70B models are available under the same license as Llama 2 and previous Code Llama…

    Continue reading

  • LlamaIndex Roadmap

    We have big plans in 2024. Stay on top of the AI ecosystem: this is a living document and will change month by month. Check it out on our GitHub discussions page! https://github.com/run-llama/llama_index/discussions/9888 Let us know your feedback/thoughts, and check out our contributing guide if…

    Continue reading

  • Ollama Multimodal Models

    Ollama now supports multimodal models with v0.1.15! This allows the model to answer your prompt using what it sees. To run it, simply install Ollama, open a terminal, and type in `ollama run llava`. Then, all you need to do is type your prompt, and drag and drop an image. There is a new `images` parameter…

    Continue reading
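The `images` parameter can also be used programmatically: Ollama's REST API accepts base64-encoded images alongside the prompt. A minimal sketch of building such a request body follows (field names taken from the v0.1.15 release notes; nothing is sent to a server here):

```python
import base64
import json

def build_llava_request(prompt: str, image_bytes: bytes) -> str:
    """Build a JSON body for Ollama's /api/generate with one image.

    The `images` field carries a list of base64-encoded images that a
    multimodal model such as llava can reference when answering.
    """
    return json.dumps({
        "model": "llava",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    })

print(build_llava_request("What is in this picture?", b"\x89PNG fake bytes"))
```

You would POST this body to `http://localhost:11434/api/generate` while the Ollama server is running.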

  • Purple Llama

    Meta introduces Purple Llama, a project for open trust and safety tools. It aims to help developers use generative AI responsibly. The project aligns with best practices from Meta’s Responsible Use Guide. Meta’s first release includes CyberSec Eval, a cybersecurity benchmark for LLMs. They also introduce Llama Guard, a safety classifier for filtering. Llama Guard…

    Continue reading

  • Evaluate RAG with LlamaIndex

    In this notebook we will look into building a RAG pipeline and evaluating it with LlamaIndex. It has the following three sections. Retrieval-Augmented Generation (RAG): LLMs are trained on vast datasets, but these will not include your specific data. Retrieval-Augmented Generation (RAG) addresses this by dynamically incorporating your data during the generation process. This is…

    Continue reading
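The retrieve-then-generate loop that RAG adds can be sketched with a toy example. Real pipelines (including LlamaIndex) use embedding-based retrieval and an LLM for generation; this stand-in uses keyword overlap and string assembly purely to show the control flow:

```python
# Toy RAG loop: score documents by keyword overlap with the query,
# keep the top-k, and hand them to a "generator" as context.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_tokens = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_tokens & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    # A real system would pass the context to an LLM prompt instead.
    return f"Answer to {query!r} using context: {' | '.join(context)}"

docs = [
    "LlamaIndex connects your data to LLMs",
    "RAG retrieves relevant chunks at query time",
    "Bananas are yellow",
]
ctx = retrieve("how does RAG retrieve data", docs)
print(generate("how does RAG retrieve data", ctx))
```

Evaluation then checks both halves: did retrieval surface the relevant chunks, and did generation stay faithful to them.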

  • A Llama-2-based model finetuned for function calling

    Llama-2-7b-chat-hf-function-calling-v2 is a Llama-2-based model finetuned for function calling. The post covers the improvements in v2, which model is best for what, and related articles:

    Continue reading
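The usual pattern with a function-calling fine-tune is: prompt the model with a function schema, get back a structured JSON call, then parse and dispatch it in application code. The schema and the model reply below are illustrative placeholders, not actual output of the v2 model:

```python
import json

# Hypothetical function schema you would include in the prompt.
functions = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {"city": {"type": "string"}},
}]

# Illustrative stand-in for what a function-calling model might return.
model_reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

def dispatch(reply: str) -> str:
    """Parse the model's JSON call and route it to application code."""
    call = json.loads(reply)
    if call["name"] == "get_weather":
        # A real handler would query a weather API here.
        return f"weather({call['arguments']['city']})"
    raise ValueError(f"unknown function: {call['name']}")

print(dispatch(model_reply))  # weather(Paris)
```

The fine-tuning teaches the model to emit that JSON reliably; everything after parsing is ordinary application code.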