
When Language Model Meets Private Library

Published 31 Oct 2022 in cs.PL, cs.CL, and cs.SE (arXiv:2210.17236v1)

Abstract: With the rapid development of pre-training techniques, a number of LLMs have been pre-trained on large-scale code corpora and perform well in code generation. In this paper, we investigate how to equip pre-trained LLMs with the ability to generate code for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for LLMs, since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For the APIRetriever, we present a dense retrieval system and also design a friendly interaction mode to involve users. For the APICoder, we can directly use off-the-shelf LLMs, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework.
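The retrieve-then-generate pipeline described in the abstract can be illustrated with a minimal sketch of the APIRetriever step: score each API documentation entry against the programmer's query and keep the top matches to feed to the code generator. The paper's retriever is a learned dense encoder; here, as a stand-in, a toy bag-of-words vector with cosine similarity is used, and the private-library API names and docstrings are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the paper's APIRetriever uses a
    # learned dense encoder instead. This is only a stand-in.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_apis(query, api_docs, k=2):
    """Rank API documentation entries by similarity to the query
    and return the top-k, which would then be prepended to the
    code-generation prompt for the APICoder."""
    q = embed(query)
    ranked = sorted(api_docs, key=lambda d: cosine(q, embed(d["doc"])),
                    reverse=True)
    return ranked[:k]

# Hypothetical private-library API docs (names invented for illustration).
api_docs = [
    {"name": "Shuffler", "doc": "shuffle the elements of a data pipe randomly"},
    {"name": "Batcher",  "doc": "group elements of a data pipe into batches"},
    {"name": "Mapper",   "doc": "apply a function to each element of a data pipe"},
]

top = retrieve_apis("randomly shuffle a data pipe", api_docs, k=1)
print(top[0]["name"])  # prints "Shuffler"
```

The friendly interaction mentioned in the abstract would fit between these two steps: the retrieved candidates are shown to the user, who confirms or prunes them before generation.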

Citations (58)
