
DNAHLM -- DNA sequence and Human Language mixed large language Model

Published 22 Oct 2024 in q-bio.GN and cs.LG | arXiv:2410.16917v2

Abstract: There are already many DNA large language models, but most of them still follow traditional uses, such as extracting sequence features for classification tasks. More innovative applications of large language models, such as prompt engineering, retrieval-augmented generation (RAG), and zero-shot or few-shot prediction, remain challenging for DNA-based models. The key issue is that DNA models and human natural language models are entirely separate; techniques like prompt engineering require natural language, which significantly limits the application of DNA language models. This paper introduces a pre-trained model based on the GPT-2 architecture, trained on a combination of DNA sequences and English text with a unified BPE tokenization method. We then convert classification and other downstream tasks into Alpaca-format instruction data and perform instruction fine-tuning on this pre-trained model, producing a fine-tuned model capable of handling multiple tasks. The model demonstrates its effectiveness in DNA-related zero-shot prediction and multi-task application. This research provides a promising direction for building a unified DNA sequence task framework.
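The abstract's conversion of downstream tasks into Alpaca-format instruction data can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the field names follow the standard Alpaca schema (`instruction`, `input`, `output`), while the instruction wording, function name, and example sequence are assumptions for illustration.

```python
import json

def to_alpaca(sequence: str, label: str) -> dict:
    """Wrap one DNA classification example (sequence + class label)
    as a single Alpaca-format instruction record."""
    return {
        # Hypothetical instruction text; the paper's prompts may differ.
        "instruction": "Classify whether the following DNA sequence is a promoter.",
        "input": sequence,
        "output": label,
    }

# Example sequence and label are illustrative, not taken from the paper.
record = to_alpaca("TATAAAAGGCGCGTACGT", "promoter")
print(json.dumps(record, indent=2))
```

A fine-tuning set would then be a list of such records, one per labeled example, serialized as JSON in the usual Alpaca layout.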

