Description
Your current environment
The output of python collect_env.py
$ python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+rocm6.4
Is debug build : False
CUDA used to build PyTorch : N/A
ROCM used to build PyTorch : 6.4.43482-0f2d60242
==============================
Python Environment
==============================
Python version : 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-6.14.0-29-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : AMD Radeon Graphics (gfx1151)
Nvidia driver version : Could not collect
cuDNN version : Could not collect
HIP runtime version : 6.4.43482
MIOpen runtime version : 3.4.0
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S
CPU family: 26
Model: 112
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 59%
CPU max MHz: 5187.0000
CPU min MHz: 599.0000
BogoMIPS: 5999.69
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; IBPB on VMEXIT only
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] conch-triton-kernels==1.2.1
[pip3] flake8==6.0.0
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.5.0
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton-rocm==3.4.0+gitf7888497
[pip3] pyzmq==25.1.2
[pip3] torch==2.8.0+rocm6.4
[pip3] torchaudio==2.8.0+rocm6.4
[pip3] torchvision==0.23.0+rocm6.4
[pip3] transformers==4.56.1
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.3.1
[conda] _anaconda_depends 2024.02 py311_mkl_1
[conda] blas 1.0 mkl
[conda] conch-triton-kernels 1.2.1 pypi_0 pypi
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.4 py311h08b1b3b_0
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] numpydoc 1.5.0 py311h06a4308_0
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cufile-cu12 1.11.1.6 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton-rocm 3.4.0+gitf7888497 pypi_0 pypi
[conda] pyzmq 25.1.2 py311h6a678d5_0
[conda] torch 2.8.0+rocm6.4 pypi_0 pypi
[conda] torchaudio 2.8.0+rocm6.4 pypi_0 pypi
[conda] torchvision 0.23.0+rocm6.4 pypi_0 pypi
[conda] transformers 4.56.1 pypi_0 pypi
[conda] transformers-stream-generator 0.0.5 pypi_0 pypi
[conda] triton 3.3.1 pypi_0 pypi
==============================
vLLM Info
==============================
ROCM Version : 6.4.43484-123eb5128
vLLM Version : 0.10.2+gfx1151 (git sha: fx1151)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
============================ ROCm System Management Interface ============================
================================ Weight between two GPUs =================================
GPU0
GPU0 0
================================= Hops between two GPUs ==================================
GPU0
GPU0 0
=============================== Link Type between two GPUs ===============================
GPU0
GPU0 0
======================================= Numa Nodes =======================================
GPU[0] : (Topology) Numa Node: 0
GPU[0] : (Topology) Numa Affinity: -1
================================== End of ROCm SMI Log ===================================
==============================
Environment Variables
==============================
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:/opt/rocm/lib:
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
$ vllm --version
DEBUG 09-10 18:43:10 [__init__.py:28] No plugins for group vllm.platform_plugins found.
DEBUG 09-10 18:43:10 [__init__.py:34] Checking if TPU platform is available.
DEBUG 09-10 18:43:10 [__init__.py:52] TPU platform is not available because: No module named 'libtpu'
DEBUG 09-10 18:43:10 [__init__.py:58] Checking if CUDA platform is available.
DEBUG 09-10 18:43:10 [__init__.py:82] Exception happens when checking CUDA platform: NVML Shared Library Not Found
DEBUG 09-10 18:43:10 [__init__.py:99] CUDA platform is not available because: NVML Shared Library Not Found
DEBUG 09-10 18:43:10 [__init__.py:106] Checking if ROCm platform is available.
DEBUG 09-10 18:43:10 [__init__.py:120] ROCm platform is not available because: No module named 'amdsmi'
DEBUG 09-10 18:43:10 [__init__.py:127] Checking if XPU platform is available.
DEBUG 09-10 18:43:10 [__init__.py:146] XPU platform is not available because: No module named 'intel_extension_for_pytorch'
DEBUG 09-10 18:43:10 [__init__.py:153] Checking if CPU platform is available.
INFO 09-10 18:43:10 [__init__.py:220] No platform detected, vLLM is running on UnspecifiedPlatform
DEBUG 09-10 18:43:19 [utils.py:168] Setting VLLM_WORKER_MULTIPROC_METHOD to 'spawn'
DEBUG 09-10 18:43:19 [__init__.py:36] Available plugins for group vllm.general_plugins:
DEBUG 09-10 18:43:19 [__init__.py:38] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver
DEBUG 09-10 18:43:19 [__init__.py:41] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load.
DEBUG 09-10 18:43:19 [parallel.py:437] Disabled the custom all-reduce kernel because it is not supported on current platform.
Traceback (most recent call last):
File "/home/modelscope/anaconda3/bin/vllm", line 7, in <module>
sys.exit(main())
^^^^^^
File "/home/modelscope/ai/vllm/vllm/entrypoints/cli/main.py", line 46, in main
cmd.subparser_init(subparsers).set_defaults(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/entrypoints/cli/serve.py", line 64, in subparser_init
serve_parser = make_arg_parser(serve_parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/entrypoints/openai/cli_args.py", line 259, in make_arg_parser
parser = AsyncEngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/engine/arg_utils.py", line 1769, in add_cli_args
parser = EngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/engine/arg_utils.py", line 881, in add_cli_args
vllm_kwargs = get_kwargs(VllmConfig)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/engine/arg_utils.py", line 272, in get_kwargs
return copy.deepcopy(_compute_kwargs(cls))
^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/ai/vllm/vllm/engine/arg_utils.py", line 179, in _compute_kwargs
default = field.default_factory()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/modelscope/anaconda3/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
File "/home/modelscope/ai/vllm/vllm/config/__init__.py", line 1930, in __post_init__
raise RuntimeError(
RuntimeError: Failed to infer device type, please set the environment variable VLLM_LOGGING_LEVEL=DEBUG
to turn on verbose logging to help debug the issue.
$ pip show amdsmi
Name: amdsmi
Version: 6.4.2
Summary: AMD System Management Interface
Home-page:
Author: AMD
Author-email:
License:
Location: /home/modelscope/anaconda3/lib/python3.11/site-packages
Requires:
Required-by:
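The logs show vLLM's platform probe failing with "No module named 'amdsmi'" even though `pip show amdsmi` reports version 6.4.2, which suggests the `vllm` entrypoint may be running under a different interpreter (or sys.path) than the one where amdsmi is installed. As a hedged diagnostic (this is a generic check, not vLLM's exact detection code), you can confirm whether the interpreter that launches `vllm` can actually import amdsmi:

```python
# Sketch of a diagnostic: run with the same `python` that backs the `vllm`
# script to see whether amdsmi is importable there, and from where.
import importlib.util
import sys

print("interpreter:", sys.executable)

spec = importlib.util.find_spec("amdsmi")
print("amdsmi importable:", spec is not None)
if spec is not None:
    # spec.origin is the file the module would be loaded from
    print("amdsmi location:", spec.origin)
```

If this prints `amdsmi importable: False` under the interpreter named in the traceback (`/home/modelscope/anaconda3/bin/...`), the package is installed into a different environment than the one running vLLM.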
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.