Your current environment
This has been present since at least the last few weeks (mid August), but probably much further back.
🐛 Describe the bug
While benchmarking P/D I realized that `--random-input-len=1` results in an empty prompt for some models, specifically DeepSeek-R1.
This is due to this chunk in the random dataset:

```python
num_special_tokens = int(tokenizer.num_special_tokens_to_add())
real_input_len = max(0, int(input_len) - num_special_tokens)
```
For DeepSeek-R1 and several other models tested (e.g. `facebook/opt-125m`), `tokenizer.num_special_tokens_to_add()` returns 1, so with `--random-input-len=1` the computed `real_input_len` is 0 and the prompt is empty. It also means that in many cases we generate fewer tokens than expected, which throws off perf calculations.
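A minimal reproduction sketch, using the Hugging Face `transformers` tokenizer and `facebook/opt-125m` (one of the affected models mentioned above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# opt-125m prepends a BOS token, so one special token is added per prompt.
num_special_tokens = int(tokenizer.num_special_tokens_to_add())
print(num_special_tokens)  # 1

# What the random dataset chunk above computes for --random-input-len=1:
input_len = 1
real_input_len = max(0, int(input_len) - num_special_tokens)
print(real_input_len)  # 0 -> zero random tokens, i.e. an empty prompt
```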
When running a benchmark, I would expect the actual prompt to contain the specified number of random input tokens. Special tokens should not be counted as part of the expected random input tokens (or we should make that behavior explicit in bench via another parameter).
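A minimal sketch of that first option, mirroring the variable names in the chunk above (`total_input_len` is a hypothetical name for illustration, not existing benchmark code):

```python
# Sketch: treat --random-input-len as the number of random tokens in the
# prompt, and account for special tokens on top of it rather than inside it.
num_special_tokens = int(tokenizer.num_special_tokens_to_add())

# Keep the full requested random length instead of subtracting from it.
real_input_len = int(input_len)

# If the token count matters for perf calculations, track the special
# tokens explicitly (hypothetical variable, for illustration only).
total_input_len = real_input_len + num_special_tokens
```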
Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.