
D4-XTuner: Low-Cost Single-GPU Fine-Tuning of Large Models

2024-01-23

1 Overview

This note covers the XTuner fine-tuning framework and the end-to-end fine-tuning workflow, using InternLM as the base model.

Introduction to Finetuning

(Course slides; images omitted.)


About XTuner



Running LLMs on an 8 GB GPU



2 Hands-On Practice

2.1 Platform

Ubuntu + Anaconda + CUDA/cuDNN + an 8 GB NVIDIA GPU


Successfully connected to the development machine.

2.2 Installation

First, enter bash:


Then create the conda environment:

conda create --name xtuner0.1.9 python=3.10 -y
# Activate the environment
conda activate xtuner0.1.9
# Go to the home directory (~ means "the current user's home path")
cd ~
# Create a version-named folder and enter it, to follow along with this tutorial
mkdir xtuner019 && cd xtuner019
# Pull the v0.1.9 source code
git clone -b v0.1.9  https://github.com/InternLM/xtuner
# Users who cannot reach GitHub should pull from Gitee instead:
# git clone -b v0.1.9 https://gitee.com/Internlm/xtuner


# Enter the source directory
cd xtuner

# Install XTuner from source
pip install -e '.[all]'


This step takes quite a while.

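Once pip finishes, a quick import check confirms the install took. (A minimal sketch; that xtuner exposes __version__ is an assumption, though mm-style packages usually do.)

# Sanity check that the editable install is importable
import xtuner
print(xtuner.__version__)  # expect 0.1.9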

Once installation finishes, the preparation work begins. (We will fine-tune internlm-chat-7b on the oasst1 dataset.)

# Create a working directory for fine-tuning on the oasst1 dataset, then enter it
mkdir ~/ft-oasst1 && cd ~/ft-oasst1

2.3 Fine-Tuning

2.3.1 Prepare the Configuration File

XTuner provides many ready-to-use configuration files, which can be listed with the following command:

# List all built-in configs
xtuner list-cfg


Copy a configuration file into the current directory:

 # xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}

In this case, that is: (note the trailing dot, which means copy to the current path)

cd ~/ft-oasst1
xtuner copy-cfg internlm_chat_7b_qlora_oasst1_e3 .


Explanation of the config file name:

xtuner copy-cfg internlm_chat_7b_qlora_oasst1_e3 .

internlm_chat_7b  - model name
qlora             - fine-tuning algorithm
oasst1            - dataset
e3                - passes over the dataset: 3 epochs

*A name without "chat", e.g. internlm-7b, denotes a base model.
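To make the naming convention concrete, here is a tiny illustrative parser (not part of XTuner, just a sketch):

# Split the config name into its four conventional fields
name = "internlm_chat_7b_qlora_oasst1_e3"
model, algo, dataset, epochs = name.rsplit("_", 3)
print(model, algo, dataset, epochs)
# -> internlm_chat_7b qlora oasst1 e3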


2.3.2 Download the Model

Since downloading the model is slow, students on the teaching platform can copy it directly:

cp -r /root/share/temp/model_repos/internlm-chat-7b ~/ft-oasst1/

The steps below are for downloading the model yourself.

Instead of letting xtuner pull the model from HuggingFace by default, download it to local storage in advance from OpenXLab/ModelScope:

# Create a directory for the model files, so they don't end up scattered everywhere
mkdir ~/ft-oasst1/internlm-chat-7b

# Install the library needed to pull the model files
pip install modelscope

# Download the model files from modelscope
cd ~/ft-oasst1
apt install git git-lfs -y
git lfs install
git lfs clone https://modelscope.cn/Shanghai_AI_Laboratory/internlm-chat-7b.git -b v1.0.3
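Since modelscope was just installed, the same download can also be done from Python instead of git-lfs. (A sketch: the revision string is assumed to match the git tag above, and cache_dir is a hypothetical target.)

# Download internlm-chat-7b via the modelscope SDK
from modelscope.hub.snapshot_download import snapshot_download

model_dir = snapshot_download(
    "Shanghai_AI_Laboratory/internlm-chat-7b",
    revision="v1.0.3",
    cache_dir="/root/ft-oasst1",  # hypothetical target directory
)
print(model_dir)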
2.3.3 Download the Dataset

https://huggingface.co/datasets/timdettmers/openassistant-guanaco/tree/main

Because of HuggingFace network issues, the dataset has already been downloaded for you; just copy it to the right place:

cd ~/ft-oasst1
# Note: there is a space and a trailing dot after ...-guanaco
cp -r /root/share/temp/datasets/openassistant-guanaco .
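To get a feel for the data format, you can peek at the first line. (A sketch; that each guanaco line is a JSON object whose "text" field holds one full dialogue is an assumption based on the published dataset.)

# Print the start of the first training sample
import json

path = "./openassistant-guanaco/openassistant_best_replies_train.jsonl"
with open(path) as f:
    sample = json.loads(f.readline())
print(sample["text"][:200])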

At this point, the files under the current path should look like this:

|-- internlm-chat-7b
|   |-- README.md
|   |-- config.json
|   |-- configuration.json
|   |-- configuration_internlm.py
|   |-- generation_config.json
|   |-- modeling_internlm.py
|   |-- pytorch_model-00001-of-00008.bin
|   |-- pytorch_model-00002-of-00008.bin
|   |-- pytorch_model-00003-of-00008.bin
|   |-- pytorch_model-00004-of-00008.bin
|   |-- pytorch_model-00005-of-00008.bin
|   |-- pytorch_model-00006-of-00008.bin
|   |-- pytorch_model-00007-of-00008.bin
|   |-- pytorch_model-00008-of-00008.bin
|   |-- pytorch_model.bin.index.json
|   |-- special_tokens_map.json
|   |-- tokenization_internlm.py
|   |-- tokenizer.model
|   `-- tokenizer_config.json
|-- internlm_chat_7b_qlora_oasst1_e3_copy.py
`-- openassistant-guanaco
    |-- openassistant_best_replies_eval.jsonl
    `-- openassistant_best_replies_train.jsonl
2.3.4 Modify the Configuration File

Change the model and dataset entries in it to local paths:

cd ~/ft-oasst1
vim internlm_chat_7b_qlora_oasst1_e3_copy.py


After making the changes in vim, type :wq to save and exit. If you think you made a mistake, use :q! to quit without saving. You can also open the Python file in an editor and modify it directly, but remember to press Ctrl+S to save afterwards.

A minus sign marks a line to delete; a plus sign marks a line to add.

# Change the model to the local path
- pretrained_model_name_or_path = 'internlm/internlm-chat-7b'
+ pretrained_model_name_or_path = './internlm-chat-7b'

# Change the training dataset to the local path
- data_path = 'timdettmers/openassistant-guanaco'
+ data_path = './openassistant-guanaco'
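Before launching training, a quick pre-flight check (not part of XTuner, just a sketch) makes sure the local paths written into the config actually exist:

# Verify the local model and dataset directories referenced by the config
from pathlib import Path

for p in ("./internlm-chat-7b", "./openassistant-guanaco"):
    assert Path(p).is_dir(), f"missing local path: {p}"
print("local model and dataset paths look good")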


Common hyperparameters:

data_path           - dataset path or HuggingFace repo name
max_length          - maximum number of tokens per sample; longer samples are truncated
pack_to_max_length  - whether to pack multiple short samples up to max_length, improving GPU utilization
accumulative_counts - gradient accumulation: number of backward passes per parameter update
evaluation_inputs   - prompts the model is asked to answer during training, to monitor training progress
evaluation_freq     - evaluation interval, in iterations
...

If you want to fill up GPU memory and make full use of the card, increase max_length and batch_size.

To finish faster, you can change max_epochs to 1.
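For intuition on how these knobs combine, here is back-of-the-envelope arithmetic using the values from this config:

# Effective batch size under gradient accumulation
batch_size = 1            # samples per GPU per forward pass
accumulative_counts = 16  # backward passes per optimizer step
gpus = 1                  # single-card setup
effective_batch = batch_size * accumulative_counts * gpus
print(effective_batch)    # 16 packed sequences (each up to max_length=2048 tokens) per update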


2.3.5 Start Fine-Tuning

Training:

xtuner train ${CONFIG_NAME_OR_PATH}

You can also add deepspeed to accelerate training:

xtuner train ${CONFIG_NAME_OR_PATH} --deepspeed deepspeed_zero2

For example, we can fine-tune InternLM-7B on the oasst1 dataset with the QLoRA algorithm:

# Single GPU
## Train with the config file we just modified
xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py

# Multi GPU
NPROC_PER_NODE=${GPU_NUM} xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py

# To enable deepspeed acceleration, just add --deepspeed deepspeed_zero2
xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2


Without acceleration, the run needs more than 11 hours.


The PTH model files produced by fine-tuning, along with assorted other files, go into the current ./work_dirs by default.

To enable deepspeed acceleration, interrupt with Ctrl+C and rerun:

rm -rf work_dirs/
xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2


Checking again: it still needs four to five hours.


After training finishes, the current path should look like this:

|-- internlm-chat-7b
|-- internlm_chat_7b_qlora_oasst1_e3_copy.py
|-- openassistant-guanaco
|   |-- openassistant_best_replies_eval.jsonl
|   `-- openassistant_best_replies_train.jsonl
`-- work_dirs
    `-- internlm_chat_7b_qlora_oasst1_e3_copy
        |-- 20231101_152923
        |   |-- 20231101_152923.log
        |   `-- vis_data
        |       |-- 20231101_152923.json
        |       |-- config.py
        |       `-- scalars.json
        |-- epoch_1.pth
        |-- epoch_2.pth
        |-- epoch_3.pth
        |-- internlm_chat_7b_qlora_oasst1_e3_copy.py
        `-- last_checkpoint

------- Continued on 2024-01-13

The run started this morning and took eight to nine hours, finishing in the evening (max_epochs was still 3 for this run).


The run finally finished at 19:56 in the evening.


Looking in the work directory: a model file is saved after each training epoch, so there are three resulting LoRA checkpoint files, epoch_1.pth, epoch_2.pth, and epoch_3.pth.
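If you want to locate the newest checkpoint programmatically, mmengine keeps a plain-text last_checkpoint file in the work directory. (A sketch; that the file simply stores the path of the latest .pth is an assumption based on mmengine's usual behavior.)

# Read the path of the most recent checkpoint
from pathlib import Path

work_dir = Path("./work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy")
latest = (work_dir / "last_checkpoint").read_text().strip()
print("latest checkpoint:", latest)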



2.3.6 Convert the PTH Model to a HuggingFace Model, i.e. Generate the Adapter Folder

xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH_file_dir} ${SAVE_PATH}

In this example:

mkdir hf
export MKL_SERVICE_FORCE_INTEL=1

xtuner convert pth_to_hf ./internlm_chat_7b_qlora_oasst1_e3_copy.py ./work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy/epoch_1.pth ./hf
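Optionally, you can verify the export with the peft library (a sketch, not part of the tutorial): loading the adapter config confirms that ./hf is a valid PEFT/LoRA adapter folder.

# Load the exported adapter config to check it
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("./hf")
print(cfg.peft_type, cfg.base_model_name_or_path)  # expect LORA and the base model path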


The path should now look like this:

|-- internlm-chat-7b
|-- internlm_chat_7b_qlora_oasst1_e3_copy.py
|-- openassistant-guanaco
|   |-- openassistant_best_replies_eval.jsonl
|   `-- openassistant_best_replies_train.jsonl
|-- hf
|   |-- README.md
|   |-- adapter_config.json
|   |-- adapter_model.bin
|   `-- xtuner_config.py
`-- work_dirs
    `-- internlm_chat_7b_qlora_oasst1_e3_copy
        |-- 20231101_152923
        |   |-- 20231101_152923.log
        |   `-- vis_data
        |       |-- 20231101_152923.json
        |       |-- config.py
        |       `-- scalars.json
        |-- epoch_1.pth
        |-- epoch_2.pth
        |-- epoch_3.pth
        |-- internlm_chat_7b_qlora_oasst1_e3_copy.py
        `-- last_checkpoint

At this point, the hf folder is what we usually mean by the "LoRA model files".

Put simply: LoRA model files = Adapter.


2.4 Deployment and Testing

2.4.1 Merge the HuggingFace Adapter into the Base LLM:
xtuner convert merge ./internlm-chat-7b ./hf ./merged --max-shard-size 2GB
# xtuner convert merge \
#     ${NAME_OR_PATH_TO_LLM} \
#     ${NAME_OR_PATH_TO_ADAPTER} \
#     ${SAVE_PATH} \
#     --max-shard-size 2GB

This merges our LoRA model into the base model and saves the result under the merged path; you can see eight model shards.
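Conceptually, the merge step is equivalent to the following simplified sketch using transformers + peft (the real xtuner convert merge command also copies tokenizer files and handles sharding details):

# Merge the LoRA adapter into the base model weights
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "./internlm-chat-7b", torch_dtype=torch.float16, trust_remote_code=True
)
merged = PeftModel.from_pretrained(base, "./hf").merge_and_unload()
merged.save_pretrained("./merged", max_shard_size="2GB")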


From here, you can look into how to publish the result to a HuggingFace model repository.


2.4.2 Chat with the Merged Model:

Use the chat command from the xtuner toolbox to talk to the large model in the merged folder, specifying internlm_chat as the prompt template (since the fine-tune is based on internlm-chat):

# Chat with the merged model (float16)
xtuner chat ./merged --prompt-template internlm_chat
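If you are curious what the internlm_chat template actually wraps around each turn, you can print it; PROMPT_TEMPLATE is the same xtuner utility the config references (a sketch; the field layout may vary across xtuner versions).

# Inspect the conversation template used above
from xtuner.utils import PROMPT_TEMPLATE
print(PROMPT_TEMPLATE.internlm_chat)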

If your hardware is limited, you can load with 4-bit quantization:

# Load with 4-bit quantization
# xtuner chat ./merged --bits 4 --prompt-template internlm_chat

After the model loads, type your question and press Enter twice (a single Enter is treated as a line break).


With 4-bit quantized loading, responses are noticeably faster.

This completes the fine-tuning and testing stages.


2.4.3 Demo
  • Modify the model path in cli_demo.py
- model_name_or_path = "/root/model/Shanghai_AI_Laboratory/internlm-chat-7b"
+ model_name_or_path = "merged"
  • Run cli_demo.py to eyeball the fine-tuning results
python ./cli_demo.py
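For reference, here is a minimal sketch of what cli_demo.py might contain (hypothetical; the actual course file may differ, and the model.chat() helper comes from InternLM's trust_remote_code implementation):

# Minimal command-line chat loop over the merged model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "merged"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path, trust_remote_code=True, torch_dtype="auto", device_map="auto"
).eval()

while True:
    query = input("\nUser >>> ").strip()
    if query.lower() in {"exit", "quit"}:
        break
    response, _history = model.chat(tokenizer, query, history=[])
    print("Bot  >>>", response)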

You can compare like this:

xtuner chat ./merged --prompt-template internlm_chat --bits 4



xtuner chat startup parameters:

--prompt-template    - specify the conversation template
--system             - specify the SYSTEM text
--system-template    - specify the SYSTEM template
--bits               - bit width for loading the LLM
--bot-name           - bot name
--with-plugins       - plugins to use
--no-streamer        - disable streaming output
--lagent             - whether to use lagent
--command-stop-word  - stop word for commands
--answer-stop-word   - stop word for answers
--offload-folder     - folder for offloading model weights (or one that already holds offloaded weights)
--max-new-tokens     - maximum number of tokens allowed in the generated text
--temperature        - sampling temperature
--top-k              - number of highest-probability tokens kept for top-k filtering
--top-p              - if set to a float < 1, keep only the smallest set of most likely tokens whose probabilities add up to at least top_p
--seed               - random seed for reproducible text generation



Appendix: Training Log for 2.3.5

(xtuner0.1.9) root@intern-studio:~/ft-oasst1# xtuner train ./internlm_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
[2024-01-13 11:28:05,147] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-01-13 11:28:20,864] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
01/13 11:28:29 - mmengine - INFO -
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 1261899922
    GPU 0: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 11.7, V11.7.99
    GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
    PyTorch: 2.1.2+cu121
    PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

    OpenCV: 4.9.0
    MMEngine: 0.10.2

Runtime environment:
    launcher: none
    randomness: {'seed': None, 'deterministic': False}
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    deterministic: False
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------

01/13 11:28:29 - mmengine - INFO - Config:
SYSTEM = ''
accumulative_counts = 16
batch_size = 1
betas = (
    0.9,
    0.999,
)
custom_hooks = [
    dict(
        tokenizer=dict(
            padding_side='right',
            pretrained_model_name_or_path='./internlm-chat-7b',
            trust_remote_code=True,
            type='transformers.AutoTokenizer.from_pretrained'),
        type='xtuner.engine.DatasetInfoHook'),
    dict(
        evaluation_inputs=[
            '请给我介绍五个上海的景点',
            'Please tell me five scenic spots in Shanghai',
        ],
        every_n_iters=500,
        prompt_template='xtuner.utils.PROMPT_TEMPLATE.internlm_chat',
        system='',
        tokenizer=dict(
            padding_side='right',
            pretrained_model_name_or_path='./internlm-chat-7b',
            trust_remote_code=True,
            type='transformers.AutoTokenizer.from_pretrained'),
        type='xtuner.engine.EvaluateChatHook'),
]
data_path = './openassistant-guanaco'
dataloader_num_workers = 0
default_hooks = dict(
    checkpoint=dict(interval=1, type='mmengine.hooks.CheckpointHook'),
    logger=dict(interval=10, type='mmengine.hooks.LoggerHook'),
    param_scheduler=dict(type='mmengine.hooks.ParamSchedulerHook'),
    sampler_seed=dict(type='mmengine.hooks.DistSamplerSeedHook'),
    timer=dict(type='mmengine.hooks.IterTimerHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
evaluation_freq = 500
evaluation_inputs = [
    '请给我介绍五个上海的景点',
    'Please tell me five scenic spots in Shanghai',
]
launcher = 'none'
load_from = None
log_level = 'INFO'
lr = 0.0002
max_epochs = 3
max_length = 2048
max_norm = 1
model = dict(
    llm=dict(
        pretrained_model_name_or_path='./internlm-chat-7b',
        quantization_config=dict(
            bnb_4bit_compute_dtype='torch.float16',
            bnb_4bit_quant_type='nf4',
            bnb_4bit_use_double_quant=True,
            llm_int8_has_fp16_weight=False,
            llm_int8_threshold=6.0,
            load_in_4bit=True,
            load_in_8bit=False,
            type='transformers.BitsAndBytesConfig'),
        torch_dtype='torch.float16',
        trust_remote_code=True,
        type='transformers.AutoModelForCausalLM.from_pretrained'),
    lora=dict(
        bias='none',
        lora_alpha=16,
        lora_dropout=0.1,
        r=64,
        task_type='CAUSAL_LM',
        type='peft.LoraConfig'),
    type='xtuner.model.SupervisedFinetune')
optim_type = 'bitsandbytes.optim.PagedAdamW32bit'
optim_wrapper = dict(
    optimizer=dict(
        betas=(
            0.9,
            0.999,
        ),
        lr=0.0002,
        type='bitsandbytes.optim.PagedAdamW32bit',
        weight_decay=0),
    type='DeepSpeedOptimWrapper')
pack_to_max_length = True
param_scheduler = dict(
    T_max=3,
    by_epoch=True,
    convert_to_iter_based=True,
    eta_min=0.0,
    type='mmengine.optim.CosineAnnealingLR')
pretrained_model_name_or_path = './internlm-chat-7b'
prompt_template = 'xtuner.utils.PROMPT_TEMPLATE.internlm_chat'
randomness = dict(deterministic=False, seed=None)
resume = False
runner_type = 'FlexibleRunner'
strategy = dict(
    config=dict(
        bf16=dict(enabled=True),
        fp16=dict(enabled=False, initial_scale_power=16),
        gradient_accumulation_steps='auto',
        gradient_clipping='auto',
        train_micro_batch_size_per_gpu='auto',
        zero_allow_untested_optimizer=True,
        zero_force_ds_cpu_optimizer=False,
        zero_optimization=dict(overlap_comm=True, stage=2)),
    exclude_frozen_parameters=True,
    gradient_accumulation_steps=16,
    gradient_clipping=1,
    train_micro_batch_size_per_gpu=1,
    type='DeepSpeedStrategy')
tokenizer = dict(
    padding_side='right',
    pretrained_model_name_or_path='./internlm-chat-7b',
    trust_remote_code=True,
    type='transformers.AutoTokenizer.from_pretrained')
train_cfg = dict(by_epoch=True, max_epochs=3, val_interval=1)
train_dataloader = dict(
    batch_size=1,
    collate_fn=dict(type='xtuner.dataset.collate_fns.default_collate_fn'),
    dataset=dict(
        dataset=dict(
            path='./openassistant-guanaco', type='datasets.load_dataset'),
        dataset_map_fn='xtuner.dataset.map_fns.oasst1_map_fn',
        max_length=2048,
        pack_to_max_length=True,
        remove_unused_columns=True,
        shuffle_before_pack=True,
        template_map_fn=dict(
            template='xtuner.utils.PROMPT_TEMPLATE.internlm_chat',
            type='xtuner.dataset.map_fns.template_map_fn_factory'),
        tokenizer=dict(
            padding_side='right',
            pretrained_model_name_or_path='./internlm-chat-7b',
            trust_remote_code=True,
            type='transformers.AutoTokenizer.from_pretrained'),
        type='xtuner.dataset.process_hf_dataset'),
    num_workers=0,
    sampler=dict(shuffle=True, type='mmengine.dataset.DefaultSampler'))
train_dataset = dict(
    dataset=dict(path='./openassistant-guanaco', type='datasets.load_dataset'),
    dataset_map_fn='xtuner.dataset.map_fns.oasst1_map_fn',
    max_length=2048,
    pack_to_max_length=True,
    remove_unused_columns=True,
    shuffle_before_pack=True,
    template_map_fn=dict(
        template='xtuner.utils.PROMPT_TEMPLATE.internlm_chat',
        type='xtuner.dataset.map_fns.template_map_fn_factory'),
    tokenizer=dict(
        padding_side='right',
        pretrained_model_name_or_path='./internlm-chat-7b',
        trust_remote_code=True,
        type='transformers.AutoTokenizer.from_pretrained'),
    type='xtuner.dataset.process_hf_dataset')
visualizer = None
weight_decay = 0
work_dir = './work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy'

01/13 11:28:30 - mmengine - WARNING - Failed to search registry with scope "mmengine" in the "builder" registry tree. As a workaround, the current "builder" registry in "xtuner" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmengine" is a correct scope, or whether the registry is initialized.
01/13 11:28:30 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 --------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DatasetInfoHook
(NORMAL      ) EvaluateChatHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
 --------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
 --------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) EvaluateChatHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_val:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
 --------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_val_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
after_val:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) EvaluateChatHook
 --------------------
after_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) EvaluateChatHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_test:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
 --------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_test_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test:
(VERY_HIGH   ) RuntimeInfoHook
 --------------------
after_run:
(BELOW_NORMAL) LoggerHook
 --------------------
Flattening the indices: 100%|█████████████████████████████████████████████| 9846/9846 [00:00<00:00, 32970.46 examples/s]
Map: 100%|█████████████████████████████████████████████████████████████████| 9846/9846 [00:03<00:00, 2521.61 examples/s]
01/13 11:28:36 - mmengine - WARNING - Dataset Dataset has no metainfo. ``dataset_meta`` in visualizer will be None.
quantization_config convert to <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 8/8 [00:10<00:00,  1.26s/it]
01/13 11:28:51 - mmengine - INFO - dispatch internlm attn forward
01/13 11:28:51 - mmengine - WARNING - Due to the implementation of the PyTorch version of flash attention, even when the `output_attentions` flag is set to True, it is not possible to return the `attn_weights`.
[2024-01-13 11:29:11,722] [INFO] [logging.py:96:log_dist] [Rank -1] DeepSpeed info: version=0.12.6, git-hash=unknown, git-branch=unknown
[2024-01-13 11:29:11,722] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-01-13 11:29:11,722] [INFO] [comm.py:652:init_distributed] Not using the DeepSpeed or dist launchers, attempting to detect MPI environment...
[2024-01-13 11:29:11,869] [INFO] [comm.py:702:mpi_discovery] Discovered MPI settings of world_rank=0, local_rank=0, world_size=1, master_addr=192.168.227.2, master_port=29500
[2024-01-13 11:29:11,869] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2024-01-13 11:29:13,108] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2024-01-13 11:29:13,113] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2024-01-13 11:29:13,113] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2024-01-13 11:29:13,191] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = PagedAdamW32bit
[2024-01-13 11:29:13,191] [INFO] [utils.py:56:is_zero_supported_optimizer] Checking ZeRO support for optimizer=PagedAdamW32bit type=<class 'bitsandbytes.optim.adamw.PagedAdamW3