Llama

Meta Llama 3 is the next generation of state-of-the-art open-source large language models. This release features pretrained and instruction-fine-tuned models with 8B and 70B parameters that support a broad range of use cases.

Llama 3 uses a tokenizer with a vocabulary of 128K tokens, which encodes language much more efficiently than Llama 2's 32K-token tokenizer and leads to substantially improved model performance.
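
To see the tokenizer in action, here is a minimal sketch that counts tokens with the Hugging Face transformers library. The repository name meta-llama/Meta-Llama-3-8B, and the need to request gated access to it, are assumptions on our part rather than details from the announcement above.

```python
# Minimal sketch: inspecting the Llama 3 tokenizer with Hugging Face transformers.
# Assumes `pip install transformers` and access to the gated
# meta-llama/Meta-Llama-3-8B repository (the repo name is an assumption).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "Llama 3 uses a tokenizer with a vocabulary of 128K tokens."
token_ids = tokenizer.encode(text)

print(f"Vocabulary size: {len(tokenizer)}")            # roughly 128K entries
print(f"Tokens in the sample sentence: {len(token_ids)}")
print(tokenizer.convert_ids_to_tokens(token_ids))       # show the actual subwords
```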

We are enthusiasts of this incredible tool. On this website we explore its possibilities and share the latest news about the Llama 3 model.


  • Llama-4

    Ahmad Al-Dahle shared a glimpse into Meta’s massive AI project: training Llama 4 on a cluster with over 100,000 H100 GPUs. This scale is pushing AI boundaries and advancing both product capabilities and open-source contributions. “Great to visit one of our data centers where we’re training Llama 4 models on a cluster bigger than 100K H100’s!”…

    Continue reading

  • Meta Launches Quantized Llama Models

    Meta has announced a major advancement in AI technology by releasing its first lightweight quantized Llama models. Small and efficient enough to run on many popular mobile devices, these models represent a breakthrough in artificial intelligence (AI), particularly in terms of accessibility and performance. What Makes Quantized Llama Models Stand Out? Meta’s…

    Continue reading

  • Llama 3.2

    The two largest models in the Llama 3.2 collection, the 11B and 90B, are designed for image reasoning tasks such as document-level comprehension (including interpreting charts and graphs), image captioning, and visual grounding tasks like identifying objects in images from natural language prompts. For instance, someone could ask which month in the previous year… (see the sketch below this list).

    Continue reading
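
As a rough illustration of the image-reasoning use case in the Llama 3.2 item above, here is a minimal sketch of prompting a Llama 3.2 vision model through Hugging Face transformers. The checkpoint name, the Mllama classes, the local image file, and the chart-reading question are all assumptions drawn from public model cards, not an official recipe from these posts.

```python
# Minimal sketch: asking a Llama 3.2 vision model a question about a chart image.
# Assumes transformers >= 4.45 (with Mllama support), a GPU with enough memory,
# and access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct repository
# (all of these are assumptions, not details from the posts above).
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# A hypothetical sales chart; replace with any local image file.
image = Image.open("sales_chart.png")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Which month had the best sales last year?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```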