[Help] Errors when launching Video-LLaVA for the multimodal course #298
Reference: HswOAuth/llm_course#298
When launching, the following appears:
(videollava) root@autodl-container-7b7a41b559-cdd3bc5f:~/autodl-tmp/Video-LLaVA# python -m videollava.serve.gradio_web_server
[2024-10-26 12:28:03,631] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/miniconda3/envs/videollava/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/root/miniconda3/envs/videollava/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
/root/miniconda3/envs/videollava/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
You are using a model of type video_llava to instantiate a model of type llava. This is not supported for all configurations of models and can yield errors.
Are the warnings above related to my replacing the `pip install flash-attn --no-build-isolation` command? That command kept hanging with no progress, so I manually downloaded
flash_attn-2.6.3+cu118torch2.0cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
and installed it instead. It ran fine the first time, but after restarting it no longer works. How can I fix this?
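A flash-attn wheel filename encodes exactly which build it was compiled against, and installing one that does not match the environment is a common cause of things working once and then breaking. Below is a minimal stdlib-only sketch (the function name is my own, not part of any library) that parses those tags out of the wheel name; compare the results against `torch.__version__`, `torch.version.cuda`, and your Python minor version.

```python
import re

def parse_flash_attn_wheel(name):
    """Extract the CUDA, torch, and Python tags a flash-attn wheel was built for."""
    m = re.match(
        r"flash_attn-(?P<ver>[\d.]+)\+cu(?P<cuda>\d+)"
        r"torch(?P<torch>[\d.]+)cxx11abi(?P<abi>TRUE|FALSE)"
        r"-(?P<py>cp\d+)-cp\d+-.*\.whl",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized wheel name: {name}")
    return m.groupdict()

tags = parse_flash_attn_wheel(
    "flash_attn-2.6.3+cu118torch2.0cxx11abiFALSE-cp310-cp310-linux_x86_64.whl"
)
# The wheel above expects CUDA 11.8, torch 2.0.x, and CPython 3.10;
# a mismatch with the active environment can fail at import time.
print(tags)
```

If any tag disagrees with what `python -c "import torch; print(torch.__version__, torch.version.cuda)"` reports, pick a wheel that matches instead.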
It gets stuck at this point: the tokenizer loads fine, but it hangs when loading the model.
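Loading a multi-gigabyte checkpoint from slow disk can sit silently for minutes and look identical to a genuine hang. Here is a small helper sketch (my own utility, not part of Video-LLaVA) that runs the load in a background thread and prints a heartbeat, so a slow-but-progressing load can be told apart from a deadlock; the `load_fn` you pass in would be whatever `from_pretrained` call the server makes.

```python
import threading
import time

def load_with_heartbeat(load_fn, interval=10.0):
    """Run a slow loader in a background thread, printing elapsed time
    periodically so a long load is distinguishable from a real hang."""
    result = {}

    def target():
        result["model"] = load_fn()

    t = threading.Thread(target=target, daemon=True)
    start = time.time()
    t.start()
    while t.is_alive():
        t.join(timeout=interval)
        if t.is_alive():
            print(f"... still loading ({time.time() - start:.0f}s elapsed)")
    return result["model"]

# Usage (hypothetical loader call):
# model = load_with_heartbeat(
#     lambda: AutoModelForCausalLM.from_pretrained("LanguageBind/Video-LLaVA-7B"),
#     interval=30,
# )
```

If the heartbeat keeps printing but memory use is climbing, it is probably just slow I/O; if it prints forever with no memory growth, something is actually stuck.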







However, after running the code below, it loaded successfully in about a minute.
If you wait a few more minutes it does load, but it ultimately reports an error.
Opening the file below and adding two snippets of code got it running successfully.
Later, even after downloading those files, I still could not share the URL; it reports the error below.
Looks like I'll just have to properly enable it after all...
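For context on the share-URL failure: Gradio's `share=True` public link depends on Gradio fetching its tunnel binary at runtime, which commonly fails in restricted containers. A hedged sketch of the usual workaround: skip the share tunnel, bind the local server to all interfaces, and reach it through the host's own port mapping. The port 6006 below is an assumption based on AutoDL's custom-service convention; `demo` stands for whatever Gradio app object the server builds.

```python
def launch_kwargs(use_share_tunnel: bool) -> dict:
    """Pick Gradio launch options: either the public share tunnel, or a
    plain local server exposed via the host's port mapping."""
    if use_share_tunnel:
        # Requires Gradio to download its tunnel binary; fails offline.
        return {"share": True}
    # Bind all interfaces so the container's mapped port is reachable
    # from outside; 6006 is an assumed platform-specific mapped port.
    return {"server_name": "0.0.0.0", "server_port": 6006, "share": False}

# Hypothetical usage with an existing Gradio app object:
# demo.launch(**launch_kwargs(use_share_tunnel=False))
```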
Then the test sample errored out again; it seems the config can't be modified that way after all.
@rebibabo Has this problem been solved?
