Crazy 24 hours: netizens open-source a wave of fine-tuned models and frameworks based on the Mistral MoE model


This is crazy. Next week is very likely going to be a MoE carnival. There isn't much worth writing here, so I'll just give two links and copy the rest straight from the READMEs.

https://github.com/open-compass/MixtralKit

A toolkit for Mixtral.


        
          
https://huggingface.co/DiscoResearch/DiscoLM-mixtral-8x7b-v2

DiscoLM Mixtral 8x7b alpha is an experimental 8x7b MoE model based on Mistral AI's Mixtral 8x7b. It builds on experimental code that converts the model weights to Hugging Face format and implements Transformers-based inference, and was then fine-tuned on the Synthia, MetaMathQA, and Capybara datasets. DiscoLM Mixtral 8x7b alpha is a DiscoResearch project, created by Björn Plüster with great support from the community.
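To get a quick feel for the MoE architecture before downloading the full weights, you can inspect the checkpoint's configuration. This is a minimal sketch, assuming the experimental architecture exposes Mixtral-style config fields such as num_local_experts and num_experts_per_tok; the custom code may name them differently, hence the defensive getattr calls.

```python
from transformers import AutoConfig

# Fetch only the config, not the (very large) weights; the custom
# architecture requires trust_remote_code.
config = AutoConfig.from_pretrained(
    "DiscoResearch/DiscoLM-mixtral-8x7b-v2", trust_remote_code=True
)

# Assumed Mixtral-style MoE fields; guarded because the experimental
# arch may use different attribute names.
print("experts per layer:", getattr(config, "num_local_experts", "n/a"))
print("experts per token:", getattr(config, "num_experts_per_tok", "n/a"))
print("hidden size:      ", config.hidden_size)
```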

Table of Contents

  1. Download
  2. Benchmarks
  3. Prompt Format
  4. Datasets

Benchmarks

Huggingface Leaderboard

This model is still an early alpha with experimental code, and we can't guarantee that all values are correct. The following are the scores from our own evaluation.

| Metric              | Value |
| ------------------- | ----- |
| ARC (25-shot)       | 67.32 |
| HellaSwag (10-shot) | 86.25 |
| MMLU (5-shot)       | 70.72 |
| TruthfulQA (0-shot) | 54.17 |
| Winogrande (5-shot) | 80.72 |
| GSM8k (5-shot)      | 25.09 (bad score, no clue why) |
| Avg.                | 64.05 |

        
          
Evaluation script:
https://github.com/EleutherAI/lm-evaluation-harness
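To reproduce numbers like the ones in the table, something along the following lines should work. This is a sketch under assumptions: it targets a recent lm-evaluation-harness release that exposes the lm_eval.simple_evaluate entry point, and it only mirrors the ARC (25-shot) row; the harness version behind the README's scores is not stated, so results may not match exactly.

```python
import lm_eval

# Run ARC-Challenge with 25-shot prompting, mirroring the
# "ARC (25-shot)" row of the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=DiscoResearch/DiscoLM-mixtral-8x7b-v2,"
        "trust_remote_code=True"
    ),
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```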

      

FastEval

tbc

MTBench


        
          
```json
{
  "first_turn": 7.89375,
  "second_turn": 7.5125,
  "categories": {
      "writing": 9.25,
      "roleplay": 8.425,
      "reasoning": 5.7,
      "math": 5.85,
      "coding": 4.45,
      "extraction": 8.75,
      "stem": 9.45,
      "humanities": 9.75
  },
  "average": 7.703125
}
```

      

Prompt Format

Please note that you have to run the model with trust_remote_code=True until the new arch is merged into transformers!

This model follows the ChatML format:


        
          
```
<|im_start|>system
You are DiscoLM, a helpful assistant.
<|im_end|>
<|im_start|>user
Please tell me possible reasons to call a research collective "Disco Research"<|im_end|>
<|im_start|>assistant
```

      

This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the apply_chat_template() method:


        
          
```python
chat = [
  {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
  {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

      

If you use tokenize=True and return_tensors="pt" instead, then you will get a tokenized and formatted conversation ready to pass to model.generate().

Basic inference code:


        
          
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Custom architecture: trust_remote_code is required for now.
model = AutoModelForCausalLM.from_pretrained("DiscoResearch/DiscoLM-mixtral-8x7b-v2", low_cpu_mem_usage=True, device_map="auto", trust_remote_code=True)
tok = AutoTokenizer.from_pretrained("DiscoResearch/DiscoLM-mixtral-8x7b-v2")
chat = [
  {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
  {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
# Format with the chat template, then generate a reply.
x = tok.apply_chat_template(chat, tokenize=True, return_tensors="pt", add_generation_prompt=True).cuda()
x = model.generate(x, max_new_tokens=128).cpu()
print(tok.batch_decode(x))
```
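The full-precision 8x7b checkpoint is far too large for a single consumer GPU. As a sketch not found in the original README, and assuming the experimental architecture works with bitsandbytes quantization, the from_pretrained call above can be swapped for a 4-bit variant:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization; requires the bitsandbytes package.
# Whether the experimental arch supports this is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Drop-in replacement for the from_pretrained call above.
model = AutoModelForCausalLM.from_pretrained(
    "DiscoResearch/DiscoLM-mixtral-8x7b-v2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```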

      

Datasets

The following datasets were used for training DiscoLM Mixtral 8x7b alpha:


        
          
* [Synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
* [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
* NousResearch Capybara (currently not public)

      