How To Run Llama 2 Locally

In this guide, we explain how to run Llama 2 locally on an M1/M2 Mac, on Windows, on Linux, or even on your phone.

Running Llama 2 Locally: A Guide

One of the highlights of running Llama 2 locally is that it works without an internet connection. Just a few days after its launch, there are already several ways to run it on your personal devices. This guide covers three open-source tools for doing so:

  1. Llama.cpp (Available for Mac/Windows/Linux)
    • Overview: Llama.cpp is a C/C++ port of Llama's inference code that runs locally on a Mac using 4-bit integer quantization. It also works on Linux and Windows.
    • Installation on M1/M2 Mac: curl -L "" | bash
    • For Intel Mac or Linux: curl -L "" | bash
    • For Windows (WSL): curl -L "" | bash
  2. Ollama (For Mac)
    • Overview: Ollama is an open-source macOS application for Apple Silicon that provides an interface for running, creating, and sharing large language models. It also supports Llama 2.
    • Getting Started: Download the Ollama app from the Ollama website. After installation, download Llama 2 with: ollama pull llama2, or for the larger variant: ollama pull llama2:13b
    • To interact with the model: ollama run llama2
    • Hardware Recommendations: Ensure a minimum of 8 GB of RAM for 3B models, 16 GB for 7B models, and 32 GB for 13B models.
  3. MLC LLM (Llama on Mobile)
    • Overview: MLC LLM makes it possible to run language models directly on mobile devices, including iOS and Android.
    • For iPhone: An MLC chat application is available on the App Store. The Llama 2 models (7B, 13B, 70B) are supported but still in beta, so they are not included in the standard App Store build. To try them, you need to install the beta through TestFlight; further beta installation instructions are provided in the MLC LLM project documentation.
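For the Llama.cpp option above, the one-line install scripts depend on a hosted script. A more transparent route, sketched here under the assumption that git, make, and a C/C++ compiler are available, is to build Llama.cpp from its repository and run a quantized model directly. The model filename is a placeholder (the quantized model file must be downloaded separately), and the binary name can differ between releases:

```shell
# Build llama.cpp from source (requires git, make, and a C/C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a prompt against a local 4-bit quantized Llama 2 model.
# The model path below is a placeholder — download the quantized
# model file separately and point -m at it.
./main -m ./models/llama-2-7b.q4_0.bin \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128
```

The -n flag caps the number of tokens generated; raising it produces longer completions at the cost of more compute time.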
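Beyond the interactive ollama run session described above, Ollama also exposes a local HTTP API (on port 11434 in a default install) that scripts and other tools can call. A minimal sketch, assuming the Ollama app is installed and running:

```shell
# Pull the model once (the Ollama app must be running)
ollama pull llama2

# Send a prompt through the local REST API; responses stream back
# as JSON lines on a default install
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

This makes the locally running model usable from any program that can make HTTP requests, not just the terminal chat.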


In this article, we gave you detailed instructions on how to run Llama 2 locally. After a successful installation, you can move on to fine-tuning Llama 2.
