11 Multimodal Hands-On Cases #27

Open
opened 2024-10-23 12:53:14 +08:00 by 12390900721cs · 0 comments

Using the BLIP and BLIP-2 Models

Using the BLIP Model

Create an instance

Environment setup

Clone the BLIP repository

cd /root/autodl-tmp/
git clone https://github.com/salesforce/BLIP.git

# If you are using a conda environment, you can try installing tokenizers via conda to avoid building it from source:
conda install -c conda-forge tokenizers

Install the requirements

cd BLIP/
pip install -r requirements.txt

Check whether PyTorch can use the GPU

python

import torch

# Check whether a CUDA device is available
if torch.cuda.is_available():
    print("CUDA is available. PyTorch can use GPU.")
    print("Number of GPUs available: ", torch.cuda.device_count())
    print("Name of the CUDA device: ", torch.cuda.get_device_name(0))
else:
    print("CUDA is not available. PyTorch can only use CPU.")

Download the models

Download the bert-base-uncased model

pip install -U huggingface_hub
# Set the Hugging Face mirror
export HF_HOME=/root/autodl-tmp/huggingface-cache/
export HF_ENDPOINT=https://hf-mirror.com
cd /root/autodl-tmp/BLIP/
# Download the model
huggingface-cli download --resume-download --local-dir-use-symlinks False google-bert/bert-base-uncased --local-dir bert-base-uncased
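
BLIP's text side builds on bert-base-uncased, so having a local copy (together with the HF_HOME cache set above) avoids hitting huggingface.co at runtime. A minimal sketch, assuming the download command above placed the files in the directory shown, to confirm the local copy is usable:

# Sketch: verify the locally downloaded bert-base-uncased tokenizer loads correctly.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('/root/autodl-tmp/BLIP/bert-base-uncased')
print(tokenizer.tokenize('a photograph of an astronaut riding a horse'))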

Download the Image Captioning model

  • Download the model locally, then upload it to autodl: Image Captioning model (https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth)
  • Upload it to /root/autodl-tmp/BLIP/:

Upload complete.

Download the VQA model

  • Download the model locally, then upload it to autodl: VQA model (https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_vqa_capfilt_large.pth)
  • Upload it to /root/autodl-tmp/BLIP/:

Upload complete.
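
Optionally, a quick sanity check that both uploaded .pth files deserialize; the 'model' key is my assumption about how the BLIP repository saves checkpoints, so adjust if the structure differs:

# Sketch: confirm the uploaded checkpoints load on CPU (no model code needed).
import torch

for path in ['/root/autodl-tmp/BLIP/model_base_caption_capfilt_large.pth',
             '/root/autodl-tmp/BLIP/model_base_vqa_capfilt_large.pth']:
    ckpt = torch.load(path, map_location='cpu')
    # The 'model' key is an assumption; fall back to the raw object otherwise.
    state_dict = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
    print(path, '->', len(state_dict), 'tensors')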

Run the tasks

Running the Image Captioning task

Image Captioning is a task at the intersection of computer vision and natural language processing: the model must automatically generate text that describes the content of a given image. This requires not only understanding the visual information in the image, but also converting that information into a natural-language description.

  • Create a Jupyter notebook under /root/autodl-tmp/BLIP

Here I named it Image_Captioning_Demo.ipynb.

  • Define the image preprocessing function
from PIL import Image
import requests
import torch
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_demo_image(image_size, device):
    # Fill in the URL of the image you want to process
    img_url = 'https://wallpaperaccess.com/full/497579.jpg'
    raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
    w, h = raw_image.size
    display(raw_image.resize((w//5, h//5)))
    transform = transforms.Compose([
        transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
    ])
    image = transform(raw_image).unsqueeze(0).to(device)
    return image

  • Use the fine-tuned BLIP model (the Image Captioning checkpoint) to generate a caption for the image
from models.blip import blip_decoder
image_size = 384
image = load_demo_image(image_size=image_size, device=device)
# Path to the checkpoint
model_url = '/root/autodl-tmp/BLIP/model_base_caption_capfilt_large.pth'
model = blip_decoder(pretrained=model_url, image_size=image_size, vit='base')
model.eval()
model = model.to(device)
with torch.no_grad():
    # beam search
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5) 
    # nucleus sampling
    # caption = model.generate(image, sample=True, top_p=0.9, max_length=20, min_length=5) 
    print('caption: '+caption[0])
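
For comparison, the nucleus-sampling call that is commented out above can be run in its own cell; unlike beam search it is stochastic, so repeated runs give different captions:

# Nucleus sampling (stochastic): re-running this cell may give a different caption each time,
# whereas the beam-search call above is deterministic.
with torch.no_grad():
    sampled = model.generate(image, sample=True, top_p=0.9, max_length=20, min_length=5)
    print('caption (nucleus sampling): ' + sampled[0])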

  • Test result

Running the VQA task

VQA task: given an image and a natural-language question, the model must generate a natural-language answer.

  • Create a Jupyter notebook under /root/autodl-tmp/BLIP.

  • Define the image preprocessing function
from PIL import Image
import requests
import torch
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_demo_image(image_size, device):
    img_url = 'https://wallpaperaccess.com/full/497579.jpg'
    raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
    w, h = raw_image.size
    display(raw_image.resize((w//5, h//5)))
    transform = transforms.Compose([
        transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
    ])
    image = transform(raw_image).unsqueeze(0).to(device)
    return image

  • Run the VQA task with the model
from models.blip_vqa import blip_vqa

image_size = 480
image = load_demo_image(image_size=image_size, device=device)     
model_url = '/root/autodl-tmp/BLIP/model_base_vqa_capfilt_large.pth'

model = blip_vqa(pretrained=model_url, image_size=image_size, vit='base') 
model.eval()

model = model.to(device)

question = 'How many animals are in the picture?' 
# question = 'Describe this picture?'
with torch.no_grad():
  answer = model(image, question, train=False, inference='generate') 
  print('answer: '+answer[0])

  • Test result

Using the BLIP-2 Model

Create an instance

Use the same instance as for the BLIP model.

Environment setup

pip install -U huggingface_hub
export HF_HOME=/root/autodl-tmp/huggingface-cache/ 
export HF_ENDPOINT=https://hf-mirror.com
cd /root/autodl-tmp/
pip install git+https://github.com/huggingface/transformers.git
mkdir BLIP2

Download the model

Download via the Hugging Face mirror site, or download manually and upload.

cd /root/autodl-tmp/BLIP2/
# Download the model
huggingface-cli download --resume-download --local-dir-use-symlinks False Salesforce/blip2-opt-2.7b --local-dir blip2-opt-2.7b

Run the tasks

Create a Jupyter notebook under /root/autodl-tmp/BLIP2/.

  • Fetch an image from a URL
import requests
from PIL import Image
# url = 'https://img95.699pic.com/photo/40242/9317.jpg_wh860.jpg'
url = 'https://wallpaperaccess.com/full/497579.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert('RGB')  
display(image.resize((596, 437)))

  • Load the model
from transformers import AutoProcessor, Blip2ForConditionalGeneration 
import torch
processor = AutoProcessor.from_pretrained("/root/autodl-tmp/BLIP2/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("/root/autodl-tmp/BLIP2/blip2-opt-2.7b", torch_dtype=torch.float16)
device = "cuda" if torch.cuda.is_available() else "cpu" 
model.to(device)
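
The fp16 checkpoint needs roughly 7-8 GB of GPU memory. If that is tight, one hedged alternative is 8-bit loading; this is only a sketch and assumes bitsandbytes and accelerate are installed, which the setup above does not do:

# Optional sketch: 8-bit loading to reduce GPU memory.
# Assumes extra dependencies not installed above: pip install bitsandbytes accelerate
from transformers import AutoProcessor, Blip2ForConditionalGeneration, BitsAndBytesConfig

processor = AutoProcessor.from_pretrained("/root/autodl-tmp/BLIP2/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "/root/autodl-tmp/BLIP2/blip2-opt-2.7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # with device_map set, skip the model.to(device) call above
)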

Run the image captioning task

inputs = processor(image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)

Run image captioning with a prompt

Generate a more precise and relevant image description based on the given image and a natural-language prompt.

By supplying a text prompt, caption generation is extended: given the image, the model continues writing from the prompt.

prompt = "this is a photo of"
inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)

Run the VQA task

For visual question answering, the prompt must follow a specific format: "Question: {} Answer:"

prompt = "Question: What is the cat look like? Answer:"
inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=10)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)

Chat-based prompting

  1. Concatenate the question and answer from each turn of the conversation to create a ChatGPT-like experience.

  2. Ask the model with a prompt; it generates an answer, and that question-answer pair is appended to the conversation. Then another round follows, and this is how the context is built up.

  3. The context cannot exceed 512 tokens, since that is the context length of the language model used by BLIP-2 (OPT for the blip2-opt-2.7b checkpoint used here).

context = [
  ("What is the cat look like?", "The cat is a small, black and white cat"),
  ("Where are they?", "On the grass.")
]
question = "What for?"
template = "Question: {} Answer: {}."

prompt = " ".join([template.format(context[i][0], context[i][1]) for i in range(len(context))]) + " Question: " + question + " Answer:"

print(prompt)

inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)

generated_ids = model.generate(**inputs, max_new_tokens=10)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
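
The cell above hard-codes the context. A small helper that appends each new question/answer pair to the running context, as described in the steps above, could look like the following sketch (it reuses the processor, model, device, and image defined earlier):

# Sketch of a multi-turn loop: each call folds the previous Q/A pairs into the prompt,
# then appends the new pair so the next question sees the full conversation.
chat_context = []
template = "Question: {} Answer: {}."

def ask(image, question, max_new_tokens=10):
    history = " ".join(template.format(q, a) for q, a in chat_context)
    prompt = (history + " " if history else "") + "Question: " + question + " Answer:"
    inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    answer = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
    chat_context.append((question, answer))
    return answer

print(ask(image, "What is the cat look like?"))
print(ask(image, "Where are they?"))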

The MiniGPT-V2 Model

Create an instance

Environment setup

  • Download the code
cd /root/autodl-tmp/
git clone https://github.com/Vision-CAIR/MiniGPT-4.git 
cd MiniGPT-4
  • Run conda init, then restart the shell
  • Under /root/autodl-tmp/MiniGPT-4, make a copy of environment.yml

  • Modify environment-Copy1.yml:

  • Create the conda environment
cd /root/autodl-tmp/MiniGPT-4
conda env create -f environment-Copy1.yml 
conda activate minigptv
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install huggingface-hub==0.18.0 matplotlib==3.7.0 psutil==5.9.4
pip install iopath pyyaml==6.0 regex==2022.10.31 tokenizers==0.13.2 tqdm==4.64.1 transformers==4.30.0
pip install timm==0.6.13 webdataset==0.2.48 omegaconf==2.3.0 opencv-python==4.7.0.72 decord==0.6.0
pip install peft==0.2.0 sentence-transformers gradio==3.47.1 accelerate==0.20.3 bitsandbytes==0.37.0 scikit-image visual-genome wandb 

Download the models

Download Llama-2-7b-chat-hf

Download it with the ModelScope API, or download it locally and upload it to /root/autodl-tmp/MiniGPT-4

modelscope download --model shakechen/Llama-2-7b-chat-hf --local_dir /root/autodl-tmp/MiniGPT-4/Llama-2-7b-chat-hf/

Download the MiniGPT-v2 (after stage-3) checkpoint

After downloading, upload it to /root/autodl-tmp/MiniGPT-4

MiniGPT-v2 (after stage-3): https://drive.google.com/file/d/1HkoUUrjzFGn33cSiUkI-KcT-zysCynAz/view?usp=sharing

Update the model path in autodl-tmp/MiniGPT-4/minigpt4/configs/models/minigpt_v2.yaml

Update the model path in autodl-tmp/MiniGPT-4/eval_configs/minigptv2_eval.yaml
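
A quick way to confirm the two edits point at real paths is the sketch below; the key names model.llama_model and model.ckpt are my assumption about the MiniGPT-4 config layout, so adjust them if your files differ:

# Sketch: sanity-check the edited YAML configs. Key names are assumptions.
import os
from omegaconf import OmegaConf  # omegaconf is installed by the pip commands above

model_cfg = OmegaConf.load("/root/autodl-tmp/MiniGPT-4/minigpt4/configs/models/minigpt_v2.yaml")
eval_cfg = OmegaConf.load("/root/autodl-tmp/MiniGPT-4/eval_configs/minigptv2_eval.yaml")

print("llama_model:", model_cfg.model.llama_model, os.path.isdir(model_cfg.model.llama_model))
print("ckpt:", eval_cfg.model.ckpt, os.path.exists(eval_cfg.model.ckpt))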

Deploy the model

cd /root/autodl-tmp/MiniGPT-4
python demo_v2.py --cfg-path eval_configs/minigptv2_eval.yaml  --gpu-id 0

An error is reported (gradio cannot find the frpc_linux_amd64_v0.2 binary it needs to create a share link)

Fix it step by step

  1. Download https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64

  2. Rename the file to frpc_linux_amd64_v0.2

  3. Move the file: mv frpc_linux_amd64_v0.2 /root/miniconda3/envs/minigptv/lib/python3.9/site-packages/gradio

  4. Make it executable: chmod +x /root/miniconda3/envs/minigptv/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2

Launch the demo

cd /root/autodl-tmp/MiniGPT-4
python demo_v2.py --cfg-path eval_configs/minigptv2_eval.yaml  --gpu-id 0

Using the model

Open the demo web page:

Introduction to and Use of the Video-LLaVA Model

Create an instance

The instance configuration is the same as for MiniGPT-4.

Environment setup

cd /root/autodl-tmp/
#git clone https://github.com/PKU-YuanGroup/Video-LLaVA
git clone --depth 1 https://github.com/PKU-YuanGroup/Video-LLaVA
conda create -n videollava python=3.10 -y

Run conda init, then restart the shell.

Install the dependencies

conda activate videollava
pip install --upgrade pip  # enable PEP 660 support
cd /root/autodl-tmp/Video-LLaVA
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d

The second-to-last line hangs

pip install flash-attn --no-build-isolation hangs.

The last line fails

pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d fails with an error.

Download the model

export HF_HOME=/root/autodl-tmp/huggingface-cache/ 
export HF_ENDPOINT=https://hf-mirror.com
cd /root/autodl-tmp/Video-LLaVA
# Download the model
huggingface-cli download --resume-download --local-dir-use-symlinks False LanguageBind/Video-LLaVA-7B-hf --local-dir Video-LLaVA-7B-hf

Update the model path in /root/autodl-tmp/Video-LLaVA/videollava/serve/gradio_web_server.py

Run the model

python -m  videollava.serve.gradio_web_server