
NoRA: Neuro-Evolution of Low-Rank Adaptation of Language Models

EasyChair Preprint 15133

8 pages
Date: September 28, 2024

Abstract

Large Language Models (LLMs) such as Llama and Mistral suffer from limited diversity and originality of thought on creative writing tasks, tending to converge to a single result and to approach the desired output along a single path when fine-tuned. In this work, we develop an iterative approach to LLM alignment that further elevates the model's capability for novel text generation on downstream tasks. Our method iteratively creates a population of LoRA adapters, aligns them with IRPO, and then applies natural selection, customized crossover, and mutation. This resulted in an accuracy increase of 13% on Phi-3-Mini128K-Instruct and 11% on Mistral-7B-V0.2-Instruct on a storytelling dataset. Experiments on small creative writing tasks demonstrated the effectiveness of this method.
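As a rough illustration of the loop described in the abstract, the sketch below evolves a population of LoRA adapters represented as flat NumPy vectors. The irpo_align and fitness functions, the population size, and the blending-based crossover are hypothetical placeholders chosen for brevity, not the paper's implementation; they only show the selection, crossover, and mutation structure.

# Minimal sketch of an evolutionary loop over LoRA adapters, assuming each
# adapter is flattened to a parameter vector. irpo_align, fitness, and the
# population settings are illustrative placeholders, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

DIM = 64           # flattened size of one LoRA adapter (illustrative)
POP_SIZE = 8       # number of adapters in the population (illustrative)
GENERATIONS = 10

def irpo_align(adapter):
    # Placeholder for preference-based alignment (IRPO) of a single adapter.
    # A no-op here; in practice this would further fine-tune the adapter.
    return adapter

def fitness(adapter):
    # Placeholder fitness: negative distance to an arbitrary target vector.
    target = np.ones(DIM)
    return -np.linalg.norm(adapter - target)

def crossover(parent_a, parent_b):
    # Blend two adapters element-wise with random mixing coefficients.
    alpha = rng.uniform(0.0, 1.0, size=DIM)
    return alpha * parent_a + (1.0 - alpha) * parent_b

def mutate(adapter, scale=0.05):
    # Add small Gaussian noise to encourage diversity.
    return adapter + rng.normal(0.0, scale, size=DIM)

population = [rng.normal(size=DIM) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population = [irpo_align(a) for a in population]
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP_SIZE // 2]          # natural selection
    children = []
    while len(survivors) + len(children) < POP_SIZE:
        i, j = rng.choice(len(survivors), size=2, replace=False)
        children.append(mutate(crossover(survivors[i], survivors[j])))
    population = survivors + children

best = max(population, key=fitness)
print("best fitness after evolution:", fitness(best))

In this toy setup, fitness simply measures closeness to a fixed target vector; the paper's actual objective (novelty and quality on creative writing tasks) would replace it.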

Keyphrases: chatbots, neuroevolution, evolution of low-rank adaptation, large language models, low-rank adaptation of language models, neural networks, reversed fitness scores

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:15133,
  author       = {Iheb Gafsi},
  title        = {NoRA: Neuro-Evolution of Low-Rank Adaptation of Language Models},
  howpublished = {EasyChair Preprint 15133},
  year         = {EasyChair, 2024}}