Decreasing train/validation loss with decreasing metrics like accuracy, F1-score, etc. #9

@Parkjunghoons2

Description

Hi, first of all, thank you for this amazing research and its results.
I'm doing some research using nnPU for binary classification (implemented in PyTorch).

During my experiments, the train and validation losses decrease just as the loss does in the nnPU paper,
but the confusion matrix printed every 5 epochs is odd, scoring extremely low metrics such as precision, recall, F1-score, etc.

I suspect the problem occurs when each batch output of the model is mapped to the wrong predicted class, because the exact same setup trained with a cross-entropy loss instead learns the data very well.

I'm mapping the output of the model to a class with pred = torch.where(out[:batch_size] < 0, torch.tensor(-1), torch.tensor(1)), which looks correct to me. Is there any opinion or information on how to deal with this situation?
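For reference, here is a minimal, self-contained sanity check of that sign-based mapping (a sketch, not the actual training code; `map_to_classes` is a hypothetical helper wrapping the snippet above). Feeding scores with known signs and checking the resulting {-1, +1} labels can rule out the mapping itself as the bug:

```python
import torch

def map_to_classes(out, batch_size):
    # Sign thresholding as in the snippet above: scores < 0 -> class -1,
    # scores >= 0 -> class +1. out[:batch_size] assumes the first
    # batch_size entries of `out` are the current mini-batch scores.
    scores = out[:batch_size]
    return torch.where(scores < 0,
                       torch.full_like(scores, -1),
                       torch.full_like(scores, 1))

# Sanity check with hand-picked scores whose signs we know:
scores = torch.tensor([-2.0, -0.1, 0.0, 0.5])
pred = map_to_classes(scores, batch_size=4)
# Expected mapping: [-1, -1, 1, 1] (note that 0.0 falls on the +1 side).
```

If this check passes, one thing still worth verifying is that the ground-truth labels used to build the confusion matrix are also encoded as {-1, +1} rather than {0, 1}; a convention mismatch there would produce exactly the symptom described (decreasing loss but near-zero precision/recall).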
