When Language Model Meets Private Library
Abstract: With the rapid development of pre-training techniques, a number of LLMs have been pre-trained on large-scale code corpora and perform well on code generation. In this paper, we investigate how to equip pre-trained LLMs with the ability to generate code for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for LLMs, since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For the APIRetriever, we present a dense retrieval system and also design a friendly interaction to involve users. For the APICoder, we can directly use off-the-shelf LLMs, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework.
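To illustrate the retrieve-then-generate pipeline described above, here is a minimal sketch of the APIRetriever stage. The paper's actual system uses a learned dense encoder trained on public-library data; this toy version substitutes a bag-of-words cosine similarity so it runs standalone. The `monkey.*` API docstrings are hypothetical examples in the style of the MonkeyEval benchmark, not taken from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the paper's APIRetriever uses a
    # learned dense encoder instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_apis(query, api_docs, top_k=2):
    # Rank private-library API docs by similarity to the target task;
    # the selected docs would then be prepended to the APICoder prompt.
    q = embed(query)
    ranked = sorted(api_docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

# Hypothetical API documentation entries for a private "monkey" library.
docs = [
    "monkey.DataFrame: two-dimensional tabular data structure",
    "monkey.read_csv: read a comma-separated values file into a DataFrame",
    "monkey.concat: concatenate monkey objects along an axis",
]
print(retrieve_apis("load a csv file into a table", docs, top_k=1))
```

In the full framework, the returned docs would be injected into the generation prompt, letting the APICoder produce calls to private APIs it never saw during pre-training.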