GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
đź“„ Full Paper
đź’¬ Ask
The rapid evolution of software libraries presents a significant challenge for code generation models, which must adapt to frequent version updates while remaining compatible with earlier versions. Existing code completion benchmarks largely overlook this dynamic aspect, and the lone benchmark that does consider it relies on static code prediction tasks without execution-based evaluation, offering only a limited view of a model’s practical usability.
To address this gap, researchers at Mila - Quebec AI Institute and ServiceNow Research introduce GitChameleon, a novel, manually curated dataset comprising 116 Python code completion problems, each conditioned on specific library versions and accompanied by executable unit tests. GitChameleon is designed to rigorously assess the ability of modern large language models (LLMs) to generate version-specific code that is not only syntactically correct but also functionally accurate upon execution.
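Concretely, each problem pairs a natural-language task statement with a pinned library version and hidden unit tests that are executed to score a completion. The sketch below illustrates that execution-based setup; the field names and the `run_hidden_tests` helper are hypothetical stand-ins, not the benchmark’s actual schema or harness.

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# One benchmark item, with hypothetical field names (problem_id, library,
# version, prompt, test_code); the released dataset's schema may differ.
sample = {
    "problem_id": "nx-add-edge-color",
    "library": "networkx",
    "version": "1.9",
    "prompt": (
        "Write a function add_colored_edge(G, u, v) that adds an edge "
        "to a NetworkX graph with color red, targeting networkx==1.9."
    ),
    "test_code": textwrap.dedent(
        """
        import networkx as nx
        from solution import add_colored_edge

        G = nx.Graph()
        add_colored_edge(G, 1, 2)
        assert G[1][2]["color"] == "red"
        """
    ),
}


def run_hidden_tests(candidate_code: str, test_code: str) -> tuple[bool, str]:
    """Execute the model's candidate solution against the unit tests in an
    environment where the pinned library version is installed; return
    (passed, captured stderr)."""
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "solution.py").write_text(candidate_code)
        Path(tmp, "run_tests.py").write_text(test_code)
        result = subprocess.run(
            [sys.executable, "run_tests.py"],
            cwd=tmp,
            capture_output=True,
            text=True,
            timeout=30,
        )
        return result.returncode == 0, result.stderr
```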
The paper demonstrates that state-of-the-art LLMs struggle with this task. For example, GPT-4o achieves a pass@10 of only 39.9% (43.7% when provided with error feedback), highlighting the complexity of the problem and the limitations of current models.
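The reported pass@10 presumably follows the standard unbiased pass@k estimator of Chen et al. (2021): generate n ≥ k samples per problem, count the c that pass the unit tests, and average 1 − C(n−c, k)/C(n, k) over problems. A minimal implementation, assuming that convention:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for a single problem, given n generated
    samples of which c pass the unit tests (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Example: 20 samples for one problem, 3 of which pass the tests.
print(round(pass_at_k(n=20, c=3, k=10), 3))  # ≈ 0.895
```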
Take, for example, the problem: “Write a function that adds an edge to a NetworkX graph with color red using NetworkX==1.9.” When faced with this task, GPT-4o initially generates code that calls a method that works in an older version of NetworkX but fails in version 1.9. The authors show that this failure pattern recurs across multiple code generation models, underscoring the importance of evaluating them in settings where they must adapt to real-world library version changes.
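For intuition, a version-pinned solution can be as small as the sketch below. This is an illustrative answer, not the paper’s reference solution or GPT-4o’s actual output.

```python
import networkx as nx  # pinned to networkx==1.9 in the benchmark environment


def add_colored_edge(G, u, v):
    """Add an edge carrying a 'red' color attribute in a way that is valid
    under networkx==1.9 (keyword edge attributes exist in the 1.x series)."""
    G.add_edge(u, v, color="red")
    return G


# Helpers such as nx.set_edge_attributes changed their argument order between
# the 1.x and 2.x series, so code written against the wrong version's signature
# only surfaces its error once the tests are executed under the pinned version.
G = add_colored_edge(nx.Graph(), 1, 2)
assert G[1][2]["color"] == "red"
```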
By providing an execution-based benchmark that emphasizes the dynamic nature of code libraries, GitChameleon serves as a critical tool to advance the development of more adaptable and reliable code generation models.
The paper also explores error-log feedback as a way to improve version-conditioned generation. The authors find that feeding execution errors back to the model leads to significant gains, especially on tasks involving more complex API changes.
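Conceptually, the feedback protocol runs the candidate against the version-pinned unit tests, captures the error log, and re-prompts the model with it. Below is a minimal sketch of such a loop, where `generate_solution` stands in for any LLM call and `run_hidden_tests` for an execution harness like the one sketched above; both are hypothetical names, not the paper’s code.

```python
def generate_with_error_feedback(problem, generate_solution, run_hidden_tests,
                                 max_repair_rounds=2):
    """Self-repair loop: if the candidate fails the version-pinned unit tests,
    append the captured error log to the prompt and ask the model again.
    generate_solution(prompt) -> code and run_hidden_tests(code, tests) ->
    (passed, error_log) are placeholders, not the paper's actual interfaces."""
    prompt = problem["prompt"]
    code = generate_solution(prompt)
    for _ in range(max_repair_rounds):
        passed, error_log = run_hidden_tests(code, problem["test_code"])
        if passed:
            break
        prompt = (
            f"{problem['prompt']}\n\n"
            f"A previous attempt failed under "
            f"{problem['library']}=={problem['version']} with:\n{error_log}\n"
            "Please fix the code."
        )
        code = generate_solution(prompt)
    return code
```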
The authors believe that GitChameleon will be a valuable resource for the development of more robust and reliable code generation models. They also argue that the dataset will inspire advancements in LLMs’ continual version adaptability, which is crucial for the development of real-world applications like enterprise-level code migration.