Could you describe the exact procedure used to reproduce the results in the paper?
As I understand it, we first need to train the model on CLEAR/MLLMU-Bench for `epoch` epochs, and then the unlearning code runs for `finetune_epochs` epochs. Is there anything else?
What is the exact set of hyperparameters used?
Also, the NPO code in the repository looks wrong to me: it uses the retain-set dataloader, but NPO operates only on the forget set.
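For reference, the standard NPO objective is defined purely over forget-set examples: it penalizes the current model's likelihood relative to a frozen reference model, with no retain-set term. Here is a minimal sketch of that loss given per-sequence log-probabilities; the function name `npo_loss` and the inputs are my own illustration, not the repository's API:

```python
import math

def npo_loss(logp_theta, logp_ref, beta=0.1):
    """Sketch of the NPO loss, computed on FORGET-set examples only.

    logp_theta: per-sequence log-probs under the model being unlearned
    logp_ref:   per-sequence log-probs under the frozen reference model
    beta:       inverse-temperature hyperparameter
    """
    # Per-example: (2 / beta) * log(1 + (pi_theta / pi_ref)^beta),
    # i.e. (2 / beta) * log1p(exp(beta * (logp_theta - logp_ref))).
    losses = [
        (2.0 / beta) * math.log1p(math.exp(beta * (lt - lr)))
        for lt, lr in zip(logp_theta, logp_ref)
    ]
    return sum(losses) / len(losses)
```

Note the loss shrinks as the model's forget-set log-probs drop below the reference model's, which is the intended unlearning direction; a retain-set dataloader would only be needed if the paper adds a separate retain/regularization term on top of this.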