Ape (AI Prompt Engineer)

Backed by Y Combinator (S24).

Ape (AI Prompt Engineer) is a prompt optimization library with implementations of various state-of-the-art prompt optimization methods. Ape focuses on making benchmarking, experimentation, and collaborative research on these techniques easier for the community.

All prompt optimization techniques are implemented within a single file by inheriting from the Trainer class. These implementations are evaluated across multiple benchmarks using a unified testing format. All results obtained through this process are shared transparently.
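The single-file convention above can be sketched as follows. Note this is an illustrative sketch only: the `Trainer` name comes from the text, but the `train` method signature and the toy `RandomSearchTrainer` below are assumptions, not Ape's actual API.

```python
from abc import ABC, abstractmethod

class Trainer(ABC):
    """Base class each optimization method subclasses in its own single file.
    (Illustrative stand-in for Ape's Trainer; the signature is assumed.)"""

    @abstractmethod
    def train(self, prompt: str, trainset: list) -> str:
        """Return an optimized prompt given a starting prompt and examples."""

class RandomSearchTrainer(Trainer):
    """Toy method: pick whichever candidate prompt scores best on the trainset."""

    def __init__(self, candidates: list):
        self.candidates = candidates

    def train(self, prompt: str, trainset: list) -> str:
        # Toy metric: count how many trainset keywords the prompt mentions.
        def score(p: str) -> int:
            return sum(1 for ex in trainset if ex in p)
        return max([prompt, *self.candidates], key=score)

trainer = RandomSearchTrainer(candidates=["Answer concisely: cats dogs"])
best = trainer.train("Answer:", trainset=["cats", "dogs"])
print(best)  # prints the candidate that covers both keywords
```

Because every method lives behind the same `train`-style interface, swapping one technique for another in a benchmark run is a one-line change.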

Philosophy

Ape is not an LLM framework. Rather, it is a prompt optimization library that makes it easy to apply and compare different prompt optimization techniques.

Ape is designed to be agnostic to the LLM framework used. By subclassing the BaseGenerator class, you can easily plug in your own LLM code, including libraries and frameworks such as LangChain or the OpenAI Assistants API.
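A minimal sketch of that plug-in point: the `BaseGenerator` name is from the text, but the `generate` method name and signature below are assumptions, and the echo body stands in for a real LLM call.

```python
from abc import ABC, abstractmethod

class BaseGenerator(ABC):
    """Adapter interface for any LLM stack. (Illustrative stand-in;
    the method signature is assumed, not Ape's exact API.)"""

    @abstractmethod
    def generate(self, prompt: str, inputs: dict) -> str:
        """Run the prompt through your LLM code and return the completion."""

class MyGenerator(BaseGenerator):
    """Wrap your own client here: a LangChain chain, an OpenAI Assistants
    thread, or a plain API call."""

    def generate(self, prompt: str, inputs: dict) -> str:
        filled = prompt.format(**inputs)
        # Replace this echo with a real LLM invocation.
        return f"echo: {filled}"

gen = MyGenerator()
result = gen.generate("Translate to French: {text}", {"text": "hello"})
print(result)  # prints "echo: Translate to French: hello"
```

The optimizer only ever sees the adapter, so the same optimization run works unchanged whether the completion comes from LangChain, a raw SDK call, or a local model.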

Ape focuses on optimizing individual prompts rather than chained prompts or pipelines. The idea is that preparing datasets and optimizing each prompt separately makes the engineering process easier.