test script for deepseek #2902
Conversation
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Force-pushed dbed671 to 4f5a1c4
🤖 Hi @shuningjin, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
📋 Review Summary
This Pull Request introduces new test scripts for DeepSeek models (v2-16b and v3-671b) and refactors existing test scripts for GPT-OSS, Llama4, and Mixtral. The changes aim to onboard DeepSeek models, improve consistency in script practices, and adjust logging verbosity.
🔍 General Feedback
- The refactoring of test scripts into modular `1_test_deepseek.sh` and `2_test_deepseek.sh` for DeepSeek v3 is a positive step for organization and clarity.
- Consistent application of `absl.logging.set_verbosity(absl.logging.INFO)` across relevant Python files is good for debugging and monitoring.
- The general updates to how `BASE_OUTPUT_PATH` is handled in the shell scripts improve robustness.
- Consider reviewing the commented-out training and fine-tuning steps in the DeepSeek test scripts to ensure they are intentionally disabled, and add explanatory comments if needed.
- Ensure all golden logit paths are robust and not dependent on transient local paths or specific dates.
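The feedback on `BASE_OUTPUT_PATH` robustness and date-dependent paths can be sketched as follows. This is a minimal illustration, not the PR's actual script: the bucket name `gs://example-maxtext-test` is a placeholder, and the seconds-resolution timestamp is one way to avoid collisions between same-day reruns.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Seconds-resolution timestamp avoids collisions from same-day reruns,
# unlike a bare date such as $(date +%Y-%m-%d).
idx=$(date +%Y-%m-%d-%H-%M-%S)

# Honor an externally supplied path (e.g., set by the XLML harness);
# otherwise fall back to a default. The bucket name is a placeholder.
BASE_OUTPUT_PATH="${BASE_OUTPUT_PATH:-gs://example-maxtext-test/${idx}}"
echo "BASE_OUTPUT_PATH=${BASE_OUTPUT_PATH}"
```

This keeps the script runnable both standalone and under an orchestrator that injects its own output location.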
RissyRan
left a comment
Great work! Also, thanks for keeping them up-to-date! A few minor comments.
```shell
idx=$(date +%Y-%m-%d)

# By default, we'll use "llama4-17b-16e"
if [ -z "${MODEL_VARIATION}" ]; then
```
Do we need to set this from XLML? Or could we directly export MODEL_VARIATION="llama4-17b-16e" without the if/else?
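The suggested simplification can be sketched as a single parameter-expansion default that replaces the if/else block (a sketch of the reviewer's idea, not the merged code):

```shell
#!/usr/bin/env bash
set -euo pipefail

# One-liner default: keep the caller's value (e.g., exported by XLML),
# otherwise fall back to "llama4-17b-16e". Replaces the if/else block.
export MODEL_VARIATION="${MODEL_VARIATION:-llama4-17b-16e}"
echo "${MODEL_VARIATION}"
```

The `:-` expansion also covers the empty-string case that `[ -z ... ]` was testing for.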
For other models, we have separate test scripts for each variant. For llama4, this serves as the unified script. Do we need to account for Maverick as well?
I see. I think we should do something similar to the GPT-OSS models: pass the variant type instead of duplicating the scripts. No need to do it in this PR.
Different variants may need customization in tests, for instance:
- forward logit check criteria
- parallelism, as they are typically run on different devices (e.g., v5p-8 vs. v5p-128)

I would prefer keeping disentangled scripts for different variants when possible.
Actually, for the llama4 case, the logit criteria are set for Scout; this might not work for Maverick. Need to follow this up.
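If a single script is kept, one way to accommodate per-variant differences is a `case` dispatch on the variant name. The tolerances and parallelism degrees below are illustrative placeholders to show the shape, not the repo's actual criteria:

```shell
#!/usr/bin/env bash
set -euo pipefail

MODEL_VARIATION="${MODEL_VARIATION:-llama4-17b-16e}"

# Hypothetical per-variant settings: logit-check tolerance and FSDP degree.
# Numbers are placeholders, not real acceptance criteria.
case "${MODEL_VARIATION}" in
  llama4-17b-16e)  ATOL=0.2 ; ICI_FSDP=4  ;;  # Scout, e.g. on v5p-8
  llama4-17b-128e) ATOL=0.5 ; ICI_FSDP=64 ;;  # Maverick, e.g. on v5p-128
  *) echo "unknown MODEL_VARIATION: ${MODEL_VARIATION}" >&2 ; exit 1 ;;
esac
echo "variation=${MODEL_VARIATION} atol=${ATOL} fsdp=${ICI_FSDP}"
```

This keeps variant-specific knobs in one visible place while still failing fast on an unrecognized variant.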
RissyRan
left a comment
Thanks for the updates!
For the deepseek scripts, updated the MoE strategy to tokamax_gmm for training and decoding. Tests pass:
parambole
left a comment
LGTM. Thank you for making these changes. I have left a nit.
Force-pushed c018d3a to 00219f1
Force-pushed c018d3a to 79960df
Force-pushed 79960df to a8f8e24
Description
Main change

- Onboard test for deepseek2-16b; plan to run the full workflow in XLML on v5p-8.
  - `end_to_end/tpu/deepseek/v2-16b/test_deepseek.sh`: ckpt conversion, logit check, train, finetune, decode
- Onboard test for deepseek3-671b; plan to run step 2 in XLML on v5p-128.
  - `end_to_end/tpu/deepseek/v3-671b/1_test_deepseek.sh`
  - `end_to_end/tpu/deepseek/v3-671b/2_test_deepseek.sh`: logit check (scan), decode (unscan). Note we leave out training and finetuning, as they are covered by the ubench nightly test.
- XLML PR: GoogleCloudPlatform/ml-auto-solutions#1112
FIXES: b/423057893
Other change

- `max_logging.log` behavior is changed due to PR #2873; need to set the absl.logging level to make logs visible, for multiple scripts.

Tests

preparation

- `bash end_to_end/tpu/deepseek/v3-671b/2_test_deepseek.sh`
- `bash end_to_end/tpu/deepseek/v2-16b/test_deepseek.sh`

Checklist
Before submitting this PR, please make sure (put X in square brackets):
`gemini-review` label.