Qwen/Qwen2.5-7B on RTX4090

How many RTX4090 GPUs are needed to run Qwen/Qwen2.5-7B.

Architecture

Field           Value
model_type      qwen2
attention       GQA (heads=28, kv_heads=4, head_dim=128)
sliding_window  131,072 tokens
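
These values come straight from the model's config.json on the Hugging Face Hub; a minimal way to re-check them yourself, assuming the transformers library is installed:

from transformers import AutoConfig

# Fetch Qwen/Qwen2.5-7B's config.json from the Hugging Face Hub.
cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B")

print(cfg.model_type)                              # qwen2
print(cfg.num_attention_heads)                     # 28
print(cfg.num_key_value_heads)                     # 4 (GQA: 7 query heads per KV head)
print(cfg.hidden_size // cfg.num_attention_heads)  # 128 (head dim)
print(cfg.sliding_window)                          # 131072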

Weights

Field              Value     Label
safetensors bytes  14.19 GB  [verified]
params             7.6B      [estimated]
quantization       BF16      [verified]

Quantization reconciliation

Scheme         Predicted  Δ (actual - predicted)  Error
FP16           14.18 GB   296.95 KB over          0.0%
BF16 ✓         14.18 GB   296.95 KB over          0.0%
FP8            7.09 GB    7.09 GB over            100.0%
INT8           7.09 GB    7.09 GB over            100.0%
FP4_FP8_MIXED  3.90 GB    10.28 GB over           263.6%

Best match: BF16. The safetensors header shows all 73 weight tensors in BF16, which predicts 15,230,967,808 bytes (0.0% error).
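
The reconciliation is easy to reproduce: BF16 stores 2 bytes per parameter, so the header byte count pins down the parameter count, and halving the bytes per parameter halves the prediction. A sketch (note the report's "GB" figures are binary, i.e. GiB):

# Reproduce the reconciliation: BF16 costs 2 bytes per parameter, so the
# safetensors header byte count determines the parameter count directly.
header_bytes = 15_230_967_808
params = header_bytes / 2                   # ~7.62e9, matching the 7.6B estimate
print(f"{params / 1e9:.2f}B params")

# Halving bytes/param halves the prediction, hence the 100% error for the
# 1-byte schemes against the observed 14.19 GB. ("GB" here is binary GiB.)
for scheme, bpp in {"BF16": 2, "FP8": 1, "INT8": 1}.items():
    print(scheme, f"{params * bpp / 2**30:.2f} GB")   # 14.18 / 7.09 / 7.09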

KV cache per request

Context tokens  KV bytes
4,096           224.00 MB
32,768          1.75 GB
131,072         7.00 GB
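
These sizes follow directly from the GQA geometry above: per token, the cache stores one key and one value vector per KV head per layer. A quick check, assuming Qwen2.5-7B's 28 transformer layers (not listed in the architecture table) and a BF16 (2-byte) cache:

# Per-token KV cache = 2 (K and V) x layers x kv_heads x head_dim x dtype bytes.
# The layer count (28) is Qwen2.5-7B's depth, assumed here since the
# architecture table above does not list it; cache dtype assumed BF16.
layers, kv_heads, head_dim, dtype_bytes = 28, 4, 128, 2
kv_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes   # 57,344 B = 56 KB

for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens: {ctx * kv_per_token / 2**30:.2f} GB")
# 4,096 tokens: 0.22 GB (= 224 MB); 32,768: 1.75 GB; 131,072: 7.00 GB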
Deployment tiers

Tier   GPUs  Weight/GPU  Headroom/GPU  Concurrent @ 128K
min    2     7.09 GB     13.02 GB      3
dev ★  4     3.55 GB     16.57 GB      9
prod   7     2.03 GB     18.09 GB      10
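
The tier rows are consistent with a simple per-GPU budget: 24 GB of VRAM capped at 0.9 utilization (matching the generated command below), minus the tensor-parallel weight shard, minus roughly 1.5 GB of runtime overhead. The overhead figure is inferred from the table, not reported by llm-cal; concurrency then divides the remaining headroom by each request's per-GPU KV slice:

import math

# Assumptions inferred from the tier table, not documented by llm-cal:
# 24 GB card, 0.9 gpu-memory-utilization, ~1.48 GB runtime overhead per GPU.
VRAM, UTIL, OVERHEAD = 24.0, 0.9, 1.48
WEIGHTS, KV_128K = 14.19, 7.00          # GB: whole model / one 128K request

for gpus in (2, 4, 7):
    headroom = VRAM * UTIL - WEIGHTS / gpus - OVERHEAD
    concurrent = math.floor(headroom / (KV_128K / gpus))   # KV is sharded too
    print(f"{gpus} GPUs: {headroom:.2f} GB headroom, {concurrent} x 128K requests")
# Reproduces 3 and 9 for the 2- and 4-GPU tiers; the 7-GPU row computes 18,
# so the table's 10 is presumably capped by a non-memory limit.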

Performance

  • Prefill latency: 115 ms @ 2,000 input tokens [estimated]; a back-of-envelope check follows below
  • Cluster decode throughput: 381 tok/s [estimated]
  • Max concurrent users: 9
  • Bottleneck: memory_capacity
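
The prefill number matches a standard compute-bound approximation: forward-pass FLOPs ≈ 2 × params × tokens, divided by aggregate GPU throughput at an assumed utilization. The ~165 TFLOPS dense BF16 figure for the RTX4090 and the 40% MFU below are illustrative assumptions, not llm-cal output:

# Back-of-envelope prefill latency, compute-bound approximation.
# Assumed, not from llm-cal: RTX4090 ~165 TFLOPS dense BF16, 40% MFU.
params, tokens, gpus = 7.6e9, 2000, 4
peak_flops, mfu = 165e12, 0.40

flops = 2 * params * tokens                    # ~3.0e13 FLOPs for one prefill
latency_ms = flops / (gpus * peak_flops * mfu) * 1000
print(f"{latency_ms:.0f} ms")                  # ~115 ms, matching the estimate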

Generated command

vllm serve Qwen/Qwen2.5-7B \
  --tensor-parallel-size 4 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.9

Generated by:

llm-cal Qwen/Qwen2.5-7B --gpu RTX4090 --engine vllm --lang en