A Llama-2-based model finetuned for function calling

The Llama-2-7b-chat-hf-function-calling-v2 is a Llama-2-based model finetuned for function calling.

  • Function-calling Llama extends the Hugging Face Llama 2 models with function-calling capabilities.
  • The model responds with a structured JSON object containing the function name and its arguments.
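As a rough illustration of what consuming such a response can look like (the raw string and the `function`/`arguments` keys below are assumptions; the exact schema is defined by the model card), the JSON output can be parsed with the standard library:

```python
import json

# Hypothetical raw model output: a JSON object naming the function
# to call and its arguments. The exact field names depend on the model.
raw_response = '{"function": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(raw_response)
print(call["function"])   # name of the function the model wants to call
print(call["arguments"])  # keyword arguments to pass to that function
```

From here the application can dispatch the call to its own implementation of the named function.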

Improvements with v2

  1. Shortened syntax: only the function descriptions are needed for inference; no additional instruction is required.
  2. Function descriptions are moved outside of the system prompt. This prevents function-calling behaviour from being affected by how the system prompt was trained to influence the model.
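A minimal sketch of assembling such a prompt, assuming a format in which serialized function metadata precedes the standard Llama 2 chat template (the `<FUNCTIONS>` delimiter, the function name, and its fields here are illustrative; the authoritative template is the one in the model repository):

```python
import json

# Illustrative function metadata: name, description, and arguments,
# including whether each argument is required.
functions = [
    {
        "name": "get_weather",  # hypothetical function
        "description": "Get the current weather for a city.",
        "arguments": [
            {"name": "city", "type": "string", "required": True},
        ],
    }
]

system_prompt = "You are a helpful assistant."
user_message = "What's the weather in Paris?"

# Function metadata sits in its own block, outside the system prompt,
# so system-prompt training does not interfere with function calling.
prompt = (
    f"<FUNCTIONS>{json.dumps(functions)}</FUNCTIONS>\n\n"
    f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
)
print(prompt)
```

The key point is the separation: the system prompt stays a short behavioural instruction, while the machine-readable function descriptions live in their own delimited block.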

Which model is best for what?

  1. Larger models handle function calling better. The cross-entropy training losses are approximately 0.5 for 7B, 0.4 for 13B, and 0.3 for 70B. The absolute numbers are not meaningful in themselves, but the relative values give a sense of relative performance.
  2. Provide very clear function descriptions, including whether the arguments are required or what the default values should be.
  3. Make sure to post-process the language model’s response to check that the user has provided all necessary information. If not, prompt the user for the missing details (e.g. their name, order number, etc.).
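The post-processing step above can be sketched as follows. This is a minimal example, not the model's own tooling: the function spec, its `required` list, and the model output string are all hypothetical.

```python
import json

# Hypothetical spec for one function, listing its required arguments.
function_spec = {
    "name": "get_weather",
    "required": ["city", "date"],
}

# Hypothetical model output: the user gave a city but no date.
model_output = '{"function": "get_weather", "arguments": {"city": "Paris"}}'
call = json.loads(model_output)

# Collect any required arguments the model's call does not include.
missing = [arg for arg in function_spec["required"]
           if arg not in call.get("arguments", {})]

if missing:
    # Ask the user for the missing details rather than calling the function.
    print(f"Please provide the following missing info: {', '.join(missing)}")
else:
    print("All required arguments present; dispatching call.")
```

Validating before dispatch keeps a malformed or incomplete call from reaching the backend function, and turns the gap into a follow-up question for the user instead.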
