Preprint Article, Version 2 (this version is not peer-reviewed)

DemoCraft: Using In-Context Learning to Improve Code Generation in Large Language Models

Version 1 : Received: 15 June 2024 / Approved: 17 June 2024 / Online: 17 June 2024 (07:57:08 CEST)
Version 2 : Received: 30 October 2024 / Approved: 31 October 2024 / Online: 31 October 2024 (08:38:19 CET)

How to cite: Nirmal Joshua, K.; Sreejith, M. DemoCraft: Using In-Context Learning to Improve Code Generation in Large Language Models. Preprints 2024, 2024061105. https://doi.org/10.20944/preprints202406.1105.v2

Abstract

Generating executable code from natural language instructions using Large Language Models (LLMs) poses challenges such as semantic ambiguity and understanding task-specific contexts. To address these issues, we propose DemoCraft, a system that enhances code generation by combining in-context learning and demonstration selection with latent concept learning. Latent concept learning introduces additional concept tokens, trainable embeddings that capture task-specific knowledge. We evaluate our system on two major datasets: MBPP and HumanEval. Our experimental results demonstrate that the proposed system achieves an approximately 2x increase in the pass@k metric compared to baseline models. Furthermore, we introduce two novel evaluation metrics, correctness@k and similarity@k, and our empirical studies indicate that our system attains nearly a 3x improvement on these metrics as well.
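For context, pass@k is typically computed with the standard unbiased estimator introduced for HumanEval (Chen et al., 2021): generate n samples per problem, count the c samples that pass all unit tests, and estimate the probability that at least one of k drawn samples passes. A minimal sketch (the abstract does not specify the authors' exact evaluation harness, so this illustrates only the standard metric):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total code samples generated per problem
    c: number of samples that pass all unit tests
    k: budget of samples considered
    Returns P(at least one of k randomly drawn samples is correct).
    """
    if n - c < k:
        # Fewer than k failing samples exist, so any draw of k
        # samples must contain at least one correct solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=10 samples of which c=5 pass, pass@1 estimates the single-sample success rate at 0.5, while pass@10 is 1.0 since every draw of all ten samples includes a passing one.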

Keywords

in-context learning; code generation; latent concept learning; demonstration selection; large language models

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
