Connecting LLMs with Evolutionary Algorithms

In their paper “Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers”, the authors note that while Large Language Models (LLMs) excel at a wide range of tasks, their performance depends heavily on carefully designed prompts, which require significant human effort. To automate this work, the researchers introduce EvoPrompt, a new framework for discrete prompt optimization.

This framework harnesses evolutionary algorithms (EAs), known for their efficiency and fast convergence. Because discrete prompts are natural-language expressions that must remain coherent and readable to humans, EvoPrompt bridges the gap between LLMs and EAs, drawing simultaneously on the linguistic capabilities of LLMs and the optimization strengths of EAs.
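
For intuition, the sketch below shows how an LLM can stand in for the crossover and mutation operators by following a natural-language meta-prompt. The template wording and the `call_llm` helper are illustrative assumptions, not the paper’s exact prompts or API.

```python
# Illustrative sketch (not the authors' exact template): using an LLM as the
# "genetic operator" that crosses over and mutates two parent prompts.
# `call_llm` is an assumed helper that sends text to any chat/completions API
# and returns the model's reply as a string.

CROSSOVER_MUTATION_TEMPLATE = """\
Given the following two prompts, identify the parts where they differ,
combine their strengths into a single new prompt, and then mutate it slightly.

Prompt 1: {parent_1}
Prompt 2: {parent_2}

Return only the final new prompt."""


def evolve_prompt(parent_1: str, parent_2: str, call_llm) -> str:
    """Ask the LLM to produce a child prompt from two parent prompts."""
    meta_prompt = CROSSOVER_MUTATION_TEMPLATE.format(
        parent_1=parent_1, parent_2=parent_2
    )
    return call_llm(meta_prompt).strip()
```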

EvoPrompt requires no gradients or access to model parameters. It starts from an initial population of prompts, then iteratively generates new candidate prompts by having the LLM apply evolutionary operators, and updates the population based on each prompt’s score on a development set.
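
The overall loop is a genetic-algorithm-style search with the LLM plugged in as the variation operator. The Python sketch below captures that structure under assumed helpers: `score_fn` (a prompt’s accuracy on the development set) and `evolve_fn` (the LLM-driven operator sketched above). The selection and replacement details here are simplified for illustration and are not the paper’s exact procedure.

```python
import random
from typing import Callable, List


def evoprompt_loop(
    initial_prompts: List[str],
    score_fn: Callable[[str], float],      # assumed: evaluates a prompt on the dev set
    evolve_fn: Callable[[str, str], str],  # assumed: LLM-driven crossover + mutation
    iterations: int = 10,
) -> str:
    """Sketch of an EvoPrompt-style loop: keep a fixed-size population of prompts,
    generate children with the LLM, and retain the highest-scoring prompts."""
    population = list(initial_prompts)
    scores = {p: score_fn(p) for p in population}

    for _ in range(iterations):
        # Select two parents, biased toward higher-scoring prompts.
        parents = random.choices(
            population, weights=[scores[p] + 1e-6 for p in population], k=2
        )
        child = evolve_fn(parents[0], parents[1])
        scores[child] = score_fn(child)
        population.append(child)

        # Keep the population size constant by dropping the weakest prompt.
        population.sort(key=lambda p: scores[p], reverse=True)
        population = population[: len(initial_prompts)]

    return population[0]  # best prompt found on the development set
```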

The authors optimized prompts for both proprietary and open-source LLMs, including GPT-3.5 and Alpaca, on 9 diverse datasets covering language understanding and generation tasks. EvoPrompt performed notably well, outperforming manually designed prompts by up to 25% and existing automatic prompt-generation methods by up to 14%.

Moreover, the authors found that combining LLMs with EAs produces useful synergies, suggesting a promising direction for future work on integrating LLMs with conventional algorithms.
