
An issue when using "direct_inference" on external private data. #20

@mileswyn

Description

Hello! Thanks for your great work on SuPreM!

When I tried to run your pretrained SwinUNETR on my private contrast-enhanced CT with "direct_inference", I found that the MONAI transform "Invertd" did not work. My CT volume has size (512, 512, 709). After building the "test_loader" and passing the volume through "val_transform" (which happens in "direct_inference/dataset/dataloader_test"), the image size changed to (248, 207, 304), since it was resampled by "Spacingd" and cropped by "CropForegroundd".
After inference with "sliding_window_inference", the prediction should be restored to size (512, 512, 709) by "invert_transform", line 68 in "inference.py":
BATCH = invert_transform(organ_name, batch, val_transforms)
However, the output prediction was still (248, 207, 304). It seems that "Invertd" does not take effect in your code.

After debugging, I found that it works after converting the output tensor into a MetaTensor.
The original version in "inference.py", line 67, is
batch[organ_name] = pseudo_label_single.cpu()
It should be
batch[organ_name] = MetaTensor(pseudo_label_single.cpu(), meta=image.meta)
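For context on why this fix works (my understanding, illustrated with simplified stand-in classes rather than MONAI code): Invertd undoes transforms by reading the applied-operations record that MONAI stores on a MetaTensor, so a plain tensor gives it nothing to invert and it silently becomes a no-op:

```python
# Minimal, MONAI-free illustration of why Invertd needs a MetaTensor:
# inversion is driven by an "applied_operations" record stored on the data
# object itself. These classes are simplified stand-ins, not MONAI code.

class MetaArray:
    """Array stand-in that records the transforms applied to it."""
    def __init__(self, shape, applied_operations=None):
        self.shape = tuple(shape)
        self.applied_operations = list(applied_operations or [])

def resample_and_crop(x, new_shape):
    """Forward transform: change the shape and remember the original one."""
    out = MetaArray(new_shape, x.applied_operations)
    out.applied_operations.append(("spacing+crop", x.shape))
    return out

def invert(x):
    """Undo recorded operations; with no record, this is silently a no-op."""
    if not getattr(x, "applied_operations", None):
        return x  # a plain tensor carries no record -> nothing to invert
    shape = x.shape
    for _op, prev_shape in reversed(x.applied_operations):
        shape = prev_shape
    return MetaArray(shape)

image = MetaArray((512, 512, 709))
pred = resample_and_crop(image, (248, 207, 304))

# Like `batch[organ_name] = pseudo_label_single.cpu()`: metadata is dropped.
bare = MetaArray(pred.shape)
print(invert(bare).shape)   # (248, 207, 304) -- inversion did nothing

# Like wrapping in MetaTensor(..., meta=image.meta): the record is preserved.
print(invert(pred).shape)   # (512, 512, 709) -- original size restored
```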

Also, "Invertd" does not work when ToTensord(keys=["image"]) is included in "val_transform", presumably because it converts the MetaTensor back to a plain tensor. So I suggest deleting line 454 in "direct_inference/dataset/dataloader_test.py".
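Putting both fixes together, the relevant parts would look roughly like the fragment below. This is a sketch under my assumptions (MONAI >= 0.9, where transform tracking lives on MetaTensor); the keys and transform arguments are placeholders, not the repository's actual settings:

```python
from monai.transforms import Compose, CropForegroundd, LoadImaged, Spacingd

# "val_transform" without the trailing ToTensord, so "image" stays a
# MetaTensor and Invertd can read its applied operations
# (keys and arguments here are placeholders, not the repo's values):
val_transforms = Compose([
    LoadImaged(keys=["image"], ensure_channel_first=True),
    Spacingd(keys=["image"], pixdim=(1.5, 1.5, 1.5), mode="bilinear"),
    CropForegroundd(keys=["image"], source_key="image"),
    # ToTensord(keys=["image"])  # removed: strips the metadata Invertd needs
])

# And in "inference.py", line 67, re-wrap the prediction so it carries the
# input image's metadata (MetaTensor is monai.data.MetaTensor):
#   batch[organ_name] = MetaTensor(pseudo_label_single.cpu(), meta=image.meta)
```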

These changes work for me. I wonder if you could test them further and update your code.
Thanks again.
