This is an evaluation harness for GitChameleon, an AI coding benchmark comprising 328 Python-based problems that are conditioned on specific versions of popular libraries for scientific computing and web development.
To set up the evaluation harness, follow these steps:
- Clone the repository:
git clone https://github.com/mrcabbage972/GitChameleonBenchmark.git
- Run the setup command:
make evals-setup
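Putting the two steps together, a typical setup session looks roughly like this (it assumes make evals-setup is run from the root of the cloned repository; the directory name below is inferred from the repository URL):

git clone https://github.com/mrcabbage972/GitChameleonBenchmark.git   # fetch the benchmark code
cd GitChameleonBenchmark                                               # repository root, name inferred from the URL
make evals-setup                                                       # prepare the evaluation environment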
To evaluate your solution, execute the following command:
evaluate --solution-path SOLUTION_PATH [--workers WORKERS]
The success rates are printed to the console, and detailed logs are written to an output file next to the solution file.
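For example, an invocation might look like the following (the solution path, file extension, and worker count are placeholder values for illustration, not files or defaults shipped with the benchmark):

evaluate --solution-path path/to/solutions.json --workers 8

The optional --workers flag presumably controls how many problems are evaluated in parallel; omit it to use the default.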
If you run into any bugs or have trouble using GitChameleon, please open an issue on GitHub so we can help.
Before opening a new issue, please search the existing issues to see if someone else has already reported your problem. When you do file an issue, include:
- What you expected to happen
- What actually happened (error messages, stack traces, screenshots)
- Steps to reproduce (a minimal code example or command)
- Your environment (OS, Python version, GitChameleon commit hash)
That extra detail helps us diagnose and fix things much faster.
If you use GitChameleon in your work, please cite:

@misc{misra2025gitchameleonevaluatingaicode,
  title={GitChameleon: Evaluating AI Code Generation Against Python Library Version Incompatibilities},
  author={Diganta Misra and Nizar Islah and Victor May and Brice Rauby and Zihan Wang and Justine Gehring and Antonio Orvieto and Muawiz Chaudhary and Eilif B. Muller and Irina Rish and Samira Ebrahimi Kahou and Massimo Caccia},
  year={2025},
  eprint={2507.12367},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2507.12367},
}