4. Llama 3 OpenAI-compatible API experiment (local server) #7

Open
opened 2024-10-20 21:41:30 +08:00 by 12390900721cs · 0 comments
  1. Deploy the Chinese-LLaMA-Alpaca-3 project;
  2. Wrap it in a private, OpenAI-API-compatible model endpoint;
  3. Call that endpoint from the open-source ChatGPTNextWeb tool.

This guide assumes your OS username is ganjialing; if you use a different username, adjust every path that contains it.

Environment setup

Download the source code

cd ~/ && wget https://file.huishiwei.top/Chinese-LLaMA-Alpaca-3-3.0.tar.gz
tar -xvf Chinese-LLaMA-Alpaca-3-3.0.tar.gz

Install Miniconda

cd ~/ && wget https://repo.anaconda.com/miniconda/Miniconda3-py38_23.5.2-0-Linux-x86_64.sh
bash Miniconda3-py38_23.5.2-0-Linux-x86_64.sh

Create and activate a virtual environment

conda create -n chinese_llama_alpaca_3 python=3.8.17 pip -y
conda activate chinese_llama_alpaca_3

Download the model

pip install modelscope -i https://mirrors.aliyun.com/pypi/simple
modelscope download --model ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3
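
If you prefer Python to the CLI, ModelScope exposes the same download through snapshot_download. A minimal sketch, assuming ModelScope's default cache location:

# Sketch: fetch the model via ModelScope's Python API instead of the CLI.
from modelscope import snapshot_download

# Downloads into ~/.cache/modelscope/hub by default and returns the local path.
model_dir = snapshot_download("ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3")
print(model_dir)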

Check where the model is stored

ls -alhrt ~/.cache/modelscope/hub/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3

Starting the open-source OpenAI-compatible API

The open-source Chinese Llama 3 API implementation lives in /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/scripts/oai_api_demo. The GPU and CPU startup procedures follow below.

Fixing a bug in the script

Open /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/scripts/oai_api_demo/openai_api_server.py and find the following code:

def stream_predict(
    input,
    max_new_tokens=1024,
    top_p=0.9,
    temperature=0.2,
    top_k=40,
    num_beams=4,
    repetition_penalty=1.1,
    do_sample=True,
    model_id="llama-3-chinese",
    **kwargs,
):
    choice_data = ChatCompletionResponseStreamChoice(
        index=0, delta=DeltaMessage(role="assistant"), finish_reason=None
    )
    chunk = ChatCompletionResponse(
        model=model_id,
        choices=[choice_data],
        object="chat.completion.chunk",
    )
    yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))

    if isinstance(input, str):
        prompt = generate_completion_prompt(input)
    else:
        prompt = generate_chat_prompt(input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        do_sample=do_sample,
        **kwargs,
    )

    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    generation_kwargs = dict(
        streamer=streamer,
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=False,
        max_new_tokens=max_new_tokens,
        repetition_penalty=float(repetition_penalty)
    )

Change generation_kwargs to the following:

generation_kwargs = dict(
    streamer=streamer,
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=False,
    max_new_tokens=max_new_tokens,
    repetition_penalty=float(repetition_penalty),
    pad_token_id=tokenizer.eos_token_id, # newly added parameter
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")] # newly added parameter
)

These parameters are added to handle Llama 3's special stop token; without them, the streaming endpoint keeps repeating itself and never stops.
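
To see the actual token ids involved, you can query the tokenizer directly. A quick sketch, using the ModelScope cache path from the download step:

# Sketch: print the stop-token ids the patch above relies on.
from transformers import AutoTokenizer

model_dir = "/home/ganjialing/.cache/modelscope/hub/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
print(tokenizer.eos_token_id)                         # id of the tokenizer's eos token
print(tokenizer.convert_tokens_to_ids("<|eot_id|>"))  # id of the end-of-turn token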

Starting the GPU version of the API

Back up the requirements file

Back up /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/requirements.txt with the following command:

mv /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/requirements.txt /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/requirements.bk.txt

Install the dependencies

Create a new requirements.txt with the following command:

cat <<EOF > /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/requirements.txt
accelerate==0.30.0
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
async-timeout==4.0.3
attrs==23.2.0
bitsandbytes==0.43.1
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
datasets==2.20.0
deepspeed==0.13.1
dill==0.3.7
dnspython==2.6.1
einops==0.8.0
email_validator==2.1.1
exceptiongroup==1.2.1
fastapi==0.109.2
fastapi-cli==0.0.3
filelock==3.14.0
frozenlist==1.4.1
fsspec==2023.10.0
h11==0.14.0
hjson==3.1.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.23.3
idna==3.7
Jinja2==3.1.4
joblib==1.4.2
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
modelscope==1.17.1
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.15
networkx==3.1
ninja==1.11.1.1
numpy==1.24.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
orjson==3.10.3
packaging==24.0
pandas==2.0.3
peft==0.7.1
psutil==5.9.8
py-cpuinfo==9.0.0
pyarrow==16.0.0
pyarrow-hotfix==0.6
pydantic==1.10.11
pydantic_core==2.18.2
Pygments==2.18.0
pynvml==11.5.0
python-dateutil==2.9.0.post0
python-decouple==3.8
python-dotenv==1.0.1
python-multipart==0.0.9
pytz==2024.1
PyYAML==6.0.1
regex==2024.4.28
requests==2.32.3
rich==13.7.1
safetensors==0.4.3
scikit-learn==1.3.2
scipy==1.10.1
shellingham==1.5.4
shortuuid==1.0.13
six==1.16.0
sniffio==1.3.1
sse-starlette==2.1.0
starlette==0.36.3
sympy==1.12
threadpoolctl==3.5.0
tokenizers==0.19.1
torch==2.1.2
tqdm==4.66.4
transformers==4.41.2
triton==2.1.0
typer==0.12.3
typing_extensions==4.11.0
tzdata==2024.1
ujson==5.9.0
urllib3==2.2.1
uvicorn==0.29.0
uvloop==0.19.0
watchfiles==0.21.0
websockets==12.0
xxhash==3.4.1
yarl==1.9.4
EOF

Then install:

pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple

Start the model server

python /home/ganjialing/Chinese-LLaMA-Alpaca-3-3.0/scripts/oai_api_demo/openai_api_server.py --gpus 0 --base_model /home/ganjialing/.cache/modelscope/hub/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3
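
Once the server is up, you can smoke-test the endpoint from Python before wiring up a client UI. A minimal sketch; port 19327 is an assumption (the demo's usual default), so check the uvicorn.run call at the bottom of openai_api_server.py if your server listens elsewhere:

# Sketch: minimal smoke test against the OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:19327/v1/chat/completions",  # assumed default port
    json={
        "model": "llama-3-chinese",
        "messages": [{"role": "user", "content": "你好,介绍一下你自己"}],
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])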

Testing

Install ChatGPTNextWeb

We use the ChatGPTNextWeb (NextChat) tool to test the API. Download:

Windows: https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/releases/download/v2.14.2/NextChat_2.14.2_x64-setup.exe

Connecting to the model

Open the settings.

Set the API endpoint: for a local deployment this is localhost plus the port; for a remote server, use its address instead.

The API key can be set to anything.

Turn off the built-in ChatGPT system-prompt injection here.

Once configured, click here to close the settings.

Click here to choose the model.

Select llama-3-chinese (the model_id used by openai_api_server.py).

The test passes.
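
Beyond NextChat, the endpoint can also be exercised with the official openai Python SDK (v1.x), since the server mimics the chat-completions route. A short sketch, with the same port assumption as before; the API key is a placeholder because, as noted above, the demo server accepts any key:

# Sketch: the same request through the official openai SDK (>=1.0).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:19327/v1",  # assumed default port; adjust to your server
    api_key="anything",                    # the demo server does not validate the key
)
completion = client.chat.completions.create(
    model="llama-3-chinese",
    messages=[{"role": "user", "content": "用一句话介绍一下你自己"}],
)
print(completion.choices[0].message.content)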

Starting the CPU version of the API

Create a dedicated virtual environment for the CPU version

Create and activate the conda environment:

conda create -n chinese_llama_alpaca_3_cpu python=3.8.17 pip -y
conda activate chinese_llama_alpaca_3_cpu

Install the dependencies:

pip3 install torch==2.3.0 --index-url https://download.pytorch.org/whl/cpu
pip3 install fastapi==0.111.0 peft==0.7.1 pydantic==1.10.11 pydantic_core==2.18.2 shortuuid==1.0.13 sse-starlette==2.1.0 starlette==0.37.2 transformers==4.41.2 -i https://mirrors.aliyun.com/pypi/simple

Start the model server on the CPU

python openai_api_server.py --only_cpu --base_model /home/ganjialing/.cache/modelscope/hub/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3
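
To confirm the stop-token fix on the streaming path, you can read the SSE stream directly instead of going through NextChat. A sketch with the same port assumption as the GPU section; the data:/[DONE] framing follows the OpenAI convention, so adjust the parsing if the demo's stream differs:

# Sketch: consume the SSE stream and verify generation stops cleanly.
import json
import requests

with requests.post(
    "http://localhost:19327/v1/chat/completions",  # assumed default port
    json={
        "model": "llama-3-chinese",
        "messages": [{"role": "user", "content": "写一首关于秋天的短诗"}],
        "stream": True,
    },
    stream=True,
    timeout=600,
) as resp:
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data:"):
            continue
        payload = line[len(b"data:"):].strip()
        if payload == b"[DONE]":  # OpenAI-style sentinel, if the demo emits one
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)
print()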

Testing

Same as the GPU version.

The CPU version is noticeably slower.
