
text-generation-webui fails to load codellama: DLL load failed while importing flash_attn_2_cuda: 找不到指定的模块。 (The specified module could not be found.)

admin | 2024-03-27

When loading codellama with text-generation-webui, the following error is raised:

Traceback (most recent call last):
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1353, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "D:\Anaconda\Anaconda\envs\codellama\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\llama\modeling_llama.py", line 48, in <module>
    from flash_attn import flash_attn_func, flash_attn_varlen_func
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\flash_attn\__init__.py", line 3, in <module>
    from flash_attn.flash_attn_interface import (
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\flash_attn\flash_attn_interface.py", line 8, in <module>
    import flash_attn_2_cuda as flash_attn_cuda
ImportError: DLL load failed while importing flash_attn_2_cuda: 找不到指定的模块。

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\模型\text-generation-webui\text-generation-webui\modules\ui_model_menu.py", line 209, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "E:\模型\text-generation-webui\text-generation-webui\modules\models.py", line 85, in load_model
    output = load_func_map[loader](model_name)
  File "E:\模型\text-generation-webui\text-generation-webui\modules\models.py", line 155, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained
    model_class = _get_model_class(config, cls._model_mapping)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 387, in _get_model_class
    supported_models = model_mapping[type(config)]
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 740, in __getitem__
    return self._load_attr_from_module(model_type, model_name)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 754, in _load_attr_from_module
    return getattribute_from_module(self._modules[module_name], attr)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 698, in getattribute_from_module
    if hasattr(module, attr):
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1343, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1355, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
DLL load failed while importing flash_attn_2_cuda: 找不到指定的模块。
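Before chasing versions, it helps to confirm that the failure comes from flash-attn's compiled extension itself rather than from text-generation-webui. A minimal check (plain Python, run in the same environment; flash_attn_2_cuda is the compiled module named in the traceback):

    try:
        import flash_attn_2_cuda  # compiled CUDA extension shipped inside the flash-attn wheel
        print("flash_attn_2_cuda imported fine")
    except ImportError as e:
        print("reproduced the failure:", e)

If this single import raises the same "DLL load failed" error, the webui code is not at fault; the binary and its CUDA/torch dependencies are.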

My first suspicion was that the transformers version was wrong, so I pinned that down first: this code path needs transformers 4.35.0 or newer.

Upgrading transformers to 4.35.0 still left the same error.

Next I checked the CUDA and torch versions.

It finally turned out to be a mismatch between the CUDA version and the torch build (the prebuilt flash_attn_2_cuda extension links against a specific torch/CUDA combination, so a mismatch shows up as exactly this kind of DLL load failure):

>>> import torch
>>> print(torch.version.cuda)  # check which CUDA version torch was built against
11.8
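For a fuller picture, the same interpreter session can report the complete torch build string and the runtime GPU state; all of these are standard torch attributes:

    import torch

    print(torch.__version__)          # e.g. "2.1.0+cu118": the +cuXXX suffix names the CUDA build
    print(torch.version.cuda)         # CUDA version torch was compiled against (None for CPU-only builds)
    print(torch.cuda.is_available())  # whether a CUDA device is actually usable at runtime

On this machine torch is a CUDA 11.8 build, which is the number to compare against nvcc below.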

Running nvcc --version in a console:

Output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

So there is the mismatch: torch was built against CUDA 11.8, while nvcc reports CUDA 12.1. The fix:

First, uninstall the existing torch packages:

pip uninstall torch torchvision torchaudio

Then install the CUDA 12.1 builds:

pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/cu121/torch_stable.html
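(If that find-links page gives trouble, PyTorch's currently documented command for the same CUDA 12.1 builds is: pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121. Either form is meant to pull the cu121 wheels.)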

After that, codellama loaded successfully.
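As a final sanity check (the same standard imports used earlier), the stack should now agree with the toolkit:

    import torch
    from flash_attn import flash_attn_func  # the import that previously died with the DLL error

    print(torch.version.cuda)  # should now print 12.1, matching nvcc --version

If both lines run cleanly, the mismatch is gone and text-generation-webui can load the model.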
