Non-record: 12L Int5-MLP + Int6-Attn mixed quantization, val_bpb=1.1541#219
Open
alertcat wants to merge 5 commits into openai:main from
Conversation
Innovation over PR openai#198 (SOTA 1.1318):
- 12 transformer layers (was 11): +2.2M params, better representation
- Int5 quantization for MLP weights [-16,15]: 3 zero high bits
- zstd compression 1.88x vs int6 1.51x, saves ~1.8MB
- Funds the 12th layer within 16MB budget
- Int6 kept for attention weights (precision-sensitive)
- FA3 fallback for older PyTorch
- LR=0.025 (validated as optimal in A/B testing)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
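A minimal sketch of why int5 compresses better than int6 under a general-purpose compressor: offset-encoding int5 values from [-16,15] into [0,31] leaves the top 3 bits of every stored byte zero, which the entropy coder exploits. The PR uses zstd; `zlib` appears below only as a stdlib stand-in, so the exact 1.88x ratio will not reproduce here.

```python
import zlib

def offset_encode_int5(values):
    """Map int5 values in [-16, 15] to bytes in [0, 31] (top 3 bits zero)."""
    assert all(-16 <= v <= 15 for v in values)
    return bytes(v + 16 for v in values)

def offset_decode_int5(data):
    """Invert the offset encoding back to signed int5 values."""
    return [b - 16 for b in data]

# Toy "weight tensor" already quantized to int5.
quantized = [(i * 7) % 32 - 16 for i in range(4096)]
encoded = offset_encode_int5(quantized)

assert all(b < 32 for b in encoded)              # 3 high bits always zero
assert offset_decode_int5(encoded) == quantized  # lossless roundtrip

compressed = zlib.compress(encoded, level=9)
print(f"{len(encoded)} -> {len(compressed)} bytes")
```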
12L Int5-MLP + Int6-Attn + SmearGate + BigramHash + SWA
val_bpb: 1.1541 (sliding window stride=64, 3-seed mean) | 8xH100 SXM, 600s
Key Innovation: Mixed Int5/Int6 Quantization + 12 Layers
Instead of uniform int6 for all weights, we use precision-tiered quantization: int5 for MLP weights, which tolerate the coarser grid, and int6 for the precision-sensitive attention weights. The ~1.8MB saved by int5 MLP storage funds a 12th transformer layer, the deepest model submitted to date.
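The tiering can be sketched as symmetric per-tensor quantization with a per-module bit width (an illustrative assumption; the PR's actual scale/rounding scheme is not shown here, and `bits_for` is a hypothetical helper):

```python
def quantize_symmetric(weights, bits):
    """Symmetric per-tensor quantization to a signed `bits`-bit grid.

    int5 -> levels in [-16, 15]; int6 -> levels in [-32, 31].
    """
    qmin, qmax = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero tensors
    q = [min(qmax, max(qmin, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map integer levels back to floats."""
    return [v * scale for v in q]

def bits_for(param_name):
    """Precision-tiered policy from the PR: int5 for MLP, int6 elsewhere."""
    return 5 if "mlp" in param_name else 6

w_mlp = [0.31, -0.07, 0.92, -1.2]
q, s = quantize_symmetric(w_mlp, bits_for("blocks.0.mlp.w1"))
assert all(-16 <= v <= 15 for v in q)
```

The dequantized weights differ from the originals by at most half a quantization step, which is why the coarser int5 grid is reserved for the less precision-sensitive MLP weights.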
Results (3-seed, 8xH100 SXM)
Inter-seed range: 0.00035 (highly stable)
Architecture
Why Non-Record
This PR explores a novel compression direction, int5 MLP quantization, trading quantization precision for model depth. The 12th layer adds representational capacity, but each step is slower (107ms vs 81ms for 11L), so fewer training steps fit in the 600s budget (~5,590 vs ~7,412). The resulting depth-vs-speed tradeoff does not beat the current SOTA (1.1318), but it demonstrates int5 as a viable compression strategy for fitting deeper models within the 16MB budget.
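The step counts follow from simple budget arithmetic; the PR's measured counts (~5,590 and ~7,412) land slightly below the naive upper bounds, presumably due to warmup and eval overhead:

```python
BUDGET_S = 600.0   # wall-clock training budget
STEP_12L = 0.107   # s/step at 12 layers (from the PR)
STEP_11L = 0.081   # s/step at 11 layers (from the PR)

# Naive upper bounds on steps that fit in the budget.
steps_12l = int(BUDGET_S / STEP_12L)
steps_11l = int(BUDGET_S / STEP_11L)
print(steps_12l, steps_11l)  # PR reports ~5,590 and ~7,412 measured
```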
Reproduction
Built on techniques from PR #198, PR #162, PR #180 (SmearGate, BigramHash, OrthoInit, int6 quantization, sliding window eval).