Meet Llama 2 competitor Zephyr-7B

The Llama 2 competitor Zephyr-7B is a 7-billion-parameter model, fine-tuned from Mistral-7B, that outperforms Llama 2 70B Chat on MT Bench. You can read a detailed comparison of Mistral 7B vs LLaMA.

Zephyr is a series of language models designed to serve as helpful assistants.

Zephyr-7B-α is the first model in this series and is a fine-tuned version of Mistral-7B-v0.1. It was trained on a combination of publicly available and synthetic datasets using Direct Preference Optimization (DPO) to improve its performance, and it outperforms Llama-2-70B Chat on MT Bench.

Run Zephyr-7B-alpha with an API

You can run the Zephyr-7B model using Clarifai’s Python SDK.

Export your PAT as an environment variable

**export CLARIFAI_PAT={your personal access token}**
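
Before calling the API, it can help to confirm the token is actually visible to your script. Below is a minimal sanity check, assuming you run the code in the same shell where you exported the variable (the error message is only illustrative):

import os

# The Clarifai SDK reads the personal access token from the
# CLARIFAI_PAT environment variable; fail fast if it is missing.
if not os.environ.get("CLARIFAI_PAT"):
    raise RuntimeError("CLARIFAI_PAT is not set. Export it before running the example.")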

Check out the code below to run the model:

import os
from clarifai.client.model import Model

system_message = "You are a friendly chatbot who always responds in the style of a pirate."
prompt = "Write a tweet on future of AI"

prompt_template = f"<|system|> \
{system_message}\
</s>\
<|user|>\
{prompt}</s>\
<|assistant|>"

# Model Predict
model_prediction = Model("https://clarifai.com/huggingface-research/zephyr/models/zephyr-7B-alpha").predict_by_bytes(prompt_template.encode(), "text")

print(model_prediction.outputs[0].data.text.raw)
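
The template above mirrors Zephyr’s system/user/assistant chat format. If you want to reuse it for different prompts, a small helper keeps the special tokens in one place. The sketch below is our own wrapper around the same predict_by_bytes call, assuming the Zephyr chat layout with explicit newlines; the helper names are illustrative, not part of the Clarifai SDK:

from clarifai.client.model import Model

MODEL_URL = "https://clarifai.com/huggingface-research/zephyr/models/zephyr-7B-alpha"

def zephyr_prompt(system_message: str, prompt: str) -> str:
    # Zephyr-style chat template: system, user, and assistant turns,
    # each terminated with </s> and separated by newlines.
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

def ask_zephyr(system_message: str, prompt: str) -> str:
    # Same predict_by_bytes call as above, wrapped for reuse.
    model = Model(MODEL_URL)
    prediction = model.predict_by_bytes(zephyr_prompt(system_message, prompt).encode(), "text")
    return prediction.outputs[0].data.text.raw

print(ask_zephyr("You are a concise assistant.", "Summarize the future of AI in one tweet."))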

You can also run the Zephyr-7B API using other Clarifai client libraries such as Java, cURL, NodeJS, and PHP here.

Use Cases

Zephyr-7B-α was initially fine-tuned on a variant of the UltraChat dataset, which contains synthetic dialogues generated by ChatGPT. Further alignment was achieved using Hugging Face TRL’s DPOTrainer on the openbmb/UltraFeedback dataset, which consists of prompts and model completions ranked by GPT-4. This makes the model well suited for chat applications.
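
To make the DPO step a bit more concrete, here is a minimal sketch of the DPO loss that trainers like TRL’s DPOTrainer optimize: for each prompt, it compares the policy’s log-probabilities of the preferred and rejected completions against a frozen reference model. This is an illustrative PyTorch-style snippet, not Zephyr’s actual training code; the tensor values and the beta setting are assumptions.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-probability ratios of the policy vs. the frozen reference model
    # for the preferred ("chosen") and dispreferred ("rejected") completions.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps

    # DPO objective: widen the margin between chosen and rejected ratios,
    # scaled by beta, via a logistic (sigmoid) loss.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with per-example summed log-probs for a batch of two prompts.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.5, -10.5]))
print(loss.item())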

Limitations

Zephyr-7B-α has not been aligned to human preferences with techniques like Reinforcement Learning from Human Feedback (RLHF), nor has it been deployed with in-the-loop filtering of responses the way ChatGPT is. As a result, it can produce problematic outputs, especially when intentionally prompted to do so. The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.1) are also unknown, but it likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.
