Hello,
As mentioned in the caption of Table 4, "We distill a RoBERTa-Large model of 24 layers into a RoBERTa-Small model with 100× less parameters." Since RoBERTa-Large has roughly 355M parameters, does that mean your RoBERTa-Small has about 3.5M parameters? I cannot find such a model in the official RoBERTa repo. Could you share the pretrained RoBERTa-Small model? Thanks.
Yiming