Extending the Context of Llama3-70B-Based Models from 8k to 524k / 1048k

An interesting piece of work: LoRA adapters for Llama-3 70B that can be run with (or merged into) any Llama3-70b-based model to give it a 524k or 1048k context window.

The LoRA adapters were extracted from gradientai/Llama-3-70B-Instruct-Gradient-524k/1048k, using meta-llama/Meta-Llama-3-70B-Instruct as the base.

Extraction method:

mergekit-extract-lora meta-llama/Meta-Llama-3-70B-Instruct gradientai/Llama-3-70B-Instruct-Gradient-1048k OUTPUT_PATH --rank=32

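For intuition, mergekit-extract-lora approximates the weight delta between the long-context fine-tune and the base model with a low-rank factorization (rank 32 in the command above). Below is a minimal sketch of that idea using a plain truncated SVD; the function name is illustrative, and the real tool additionally iterates over every module, handles embeddings, and saves the result in PEFT format.

import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int = 32):
    # Factor the fine-tune's weight delta into a rank-`rank` product B @ A,
    # which is exactly the shape of a LoRA update.
    delta = (w_ft - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out_features, rank)
    a = vh[:rank, :]             # (rank, in_features)
    return a, b                  # delta ≈ b @ a

# Toy example: a small random "fine-tune" delta on one projection matrix.
w_base = torch.randn(1024, 1024)
w_ft = w_base + 0.01 * torch.randn(1024, 1024)
a, b = extract_lora(w_base, w_ft)
print(torch.norm(w_ft - (w_base + b @ a)))  # residual the rank-32 factor misses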

Gradient AI's training approach:

  • Start from meta-llama/Meta-Llama-3-70B-Instruct as the base.
  • Use NTK-aware interpolation to initialize an optimal schedule for RoPE theta, followed by empirical RoPE-theta optimization (see the sketch after this list).
  • Then train progressively, increasing the context length stage by stage.
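A minimal sketch of the NTK-aware initialization step, assuming the standard NTK-aware scaling formula; per the description above, Gradient then tuned theta empirically, so the values actually shipped in their configs will differ.

def ntk_rope_theta(base_theta: float, orig_ctx: int, target_ctx: int, head_dim: int = 128) -> float:
    # Raising the RoPE base stretches the lowest frequencies so they span the
    # longer window, instead of uniformly compressing all positions.
    scale = target_ctx / orig_ctx
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Llama-3-70B: rope_theta = 500000, 8k native window, head_dim = 128.
for ctx in (65_536, 262_144, 524_288, 1_048_576):
    print(f"{ctx:>9} -> theta ~ {ntk_rope_theta(500_000.0, 8_192, ctx):.3e}")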

Training-parameter details for the different context lengths (65k, 262k, 524k, 1048k):

[Table image: per-stage training parameters]

Needle-in-a-haystack results for Llama-3 70B Gradient Instruct 524k:

[Figure image]

Needle-in-a-haystack results for Llama-3 70B Gradient Instruct 1048k:

[Figure image]
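For readers unfamiliar with the benchmark, here is a minimal sketch of how a needle-in-a-haystack probe is built; the needle string, filler text, and question are illustrative, and the published heatmaps sweep both context length and needle depth.

import random

def build_haystack_prompt(needle: str, n_words: int, depth: float, filler: str) -> str:
    # Repeat the filler until we have at least n_words words (a rough proxy
    # for tokens), then bury the needle at the requested relative depth.
    words = (filler * (n_words // len(filler.split()) + 1)).split()[:n_words]
    words.insert(int(len(words) * depth), needle)
    return " ".join(words) + "\n\nQuestion: What is the magic number mentioned above? Answer:"

prompt = build_haystack_prompt(
    needle="The magic number is 4815162342.",
    n_words=400_000,          # push toward the 524k/1048k token windows
    depth=random.random(),    # sweep depths 0..1 to build the heatmap
    filler="The quick brown fox jumps over the lazy dog. ",
)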

Merging the adapter into a Llama3-70B-based model:


          
# This supports merging as many adapters as you want.
# python merge_adapters.py --base_model_name_or_path <base_model> --peft_model_paths <adapter1> <adapter2> <adapter3> --output_dir <merged_model>

import argparse

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_model_name_or_path", type=str)
    parser.add_argument("--peft_model_paths", type=str, nargs='+', help="List of paths to PEFT models")
    parser.add_argument("--output_dir", type=str)
    parser.add_argument("--device", type=str, default="cpu")
    parser.add_argument("--push_to_hub", action="store_true")
    parser.add_argument("--trust_remote_code", action="store_true")
    return parser.parse_args()


def main():
    args = get_args()
    if args.device == 'auto':
        device_arg = {'device_map': 'auto'}
    else:
        device_arg = {'device_map': {"": args.device}}

    print(f"Loading base model: {args.base_model_name_or_path}")
    base_model = AutoModelForCausalLM.from_pretrained(
        args.base_model_name_or_path,
        return_dict=True,
        torch_dtype=torch.float16,
        trust_remote_code=args.trust_remote_code,
        **device_arg
    )

    # Apply each adapter in turn, folding its weights into the model so the
    # next adapter is applied on top of the already-merged result.
    model = base_model
    for peft_model_path in args.peft_model_paths:
        print(f"Loading PEFT: {peft_model_path}")
        model = PeftModel.from_pretrained(model, peft_model_path, **device_arg)
        print(f"Running merge_and_unload for {peft_model_path}")
        model = model.merge_and_unload()

    tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path)

    if args.push_to_hub:
        print("Saving to hub ...")
        model.push_to_hub(args.output_dir, use_temp_dir=False)
        tokenizer.push_to_hub(args.output_dir, use_temp_dir=False)
    else:
        model.save_pretrained(args.output_dir)
        tokenizer.save_pretrained(args.output_dir)

    print(f"Model saved to {args.output_dir}")


if __name__ == "__main__":
    main()
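For example, to bake the 1048k adapter into the official instruct model (the output directory below is a placeholder), the invocation mirrors the usage comment at the top of the script:

python merge_adapters.py \
    --base_model_name_or_path meta-llama/Meta-Llama-3-70B-Instruct \
    --peft_model_paths cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter \
    --output_dir ./llama3-70b-instruct-1048k

Since --peft_model_paths accepts several paths, multiple adapters can be merged in one pass; and because the adapter is an ordinary PEFT artifact, it can also be applied at load time via PeftModel.from_pretrained without merging at all.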
      

          
https://huggingface.co/cognitivecomputations/Llama-3-70B-Gradient-524k-adapter
https://huggingface.co/cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter
merge_adapters.py: https://gist.github.com/ehartford/731e3f7079db234fa1b79a01e09859a
