
Conversation

@Hendrik-code
Collaborator

No description provided.

…to change it, and also using argmax instead of sum, and inverting in the correct order to avoid multiplying the effect of the transformation out of proportion
Hendrik-code self-assigned this on Jan 14, 2026
Hendrik-code added the bug (Something isn't working) and enhancement (New feature or request) labels on Jan 14, 2026

Copilot AI left a comment


Pull request overview

This PR fixes issues with the in/out segmentation region selection logic and adds the mix_prob parameter to transform classes. The fix addresses the incorrect handling of multi-channel segmentation masks and reorders operations in transform application.

Changes:

  • Fixed the _apply_region_mode function to correctly handle multi-channel segmentation masks by using argmax before applying mode inversion (see the sketch after this list)
  • Added mix_prob parameter to RandomInverseGPU and RandomHistogramEqualizationGPU classes
  • Reordered operations in RandomConvTransformGPU to apply mix_prob mixing before region selection
  • Updated configuration values for various transform parameters
  • Code formatting improvements (trailing whitespace removal, function signature formatting)
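
To make the argmax fix concrete, here is a minimal, self-contained sketch of why summing channels misbehaves on multi-channel masks while an argmax keeps the region mask binary. The shapes and variable names are hypothetical and are not the repository's actual `_apply_region_mode` code:

```python
import torch

# Hypothetical (B, C, H, W) multi-channel segmentation with overlapping channels.
seg = torch.zeros(1, 3, 4, 4)
seg[0, 1, :2] = 1.0   # class 1 covers rows 0-1
seg[0, 2, 1:3] = 1.0  # class 2 covers rows 1-2, overlapping class 1 on row 1

# Summing channels double-counts the overlap (value 2.0 on row 1), so any
# downstream masking or scaling amplifies the transform out of proportion.
summed = seg.sum(dim=1)

# argmax assigns exactly one class per voxel, keeping the region mask binary.
labels = seg.argmax(dim=1)         # (B, H, W) class indices
foreground = (labels > 0).float()  # 1.0 wherever a non-background class wins
```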

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| auglab/transforms/gpu/contrast.py | Fixed segmentation mask handling logic, added mix_prob parameter support, reordered transform operations |
| auglab/transforms/gpu/transforms.py | Added the mix_prob parameter to transform instantiations, formatting improvements |
| auglab/configs/transform_params_gpu.json | Updated transform configuration parameters, including probabilities and in_seg/out_seg values |


Comment on lines +940 to +943
```python
# Mix with original based on mix_prob
if torch.rand(1).item() < self.mix_prob:
    alpha = torch.rand(1, device=input.device)
    channel_data[b] = alpha * orig[b] + (1 - alpha) * channel_data[b]
```

Copilot AI Jan 14, 2026


The mix_prob operation is applied per batch element inside the loop, which means each batch item gets a different random alpha value. This differs from the RandomConvTransformGPU where mixing happens once for all batch elements. Consider whether this per-batch randomization is intentional or if consistency across transforms is desired.
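
For comparison, here is a minimal sketch of the batch-level alternative (one coin flip and one shared alpha for all batch elements, as described for RandomConvTransformGPU). The function name and signature are illustrative, not the repository's actual code; `channel_data` and `orig` follow the names in the snippet above:

```python
import torch

def mix_batch(channel_data: torch.Tensor, orig: torch.Tensor, mix_prob: float) -> torch.Tensor:
    """Batch-level variant: one coin flip and one alpha shared by all batch elements."""
    if torch.rand(1).item() < mix_prob:
        alpha = torch.rand(1, device=channel_data.device)  # single alpha for the whole batch
        channel_data = alpha * orig + (1 - alpha) * channel_data
    return channel_data
```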

@Hendrik-code (Collaborator, Author)


@NathanMolinier that is actually a good question: what behavior do we want? We should be consistent.

Collaborator

Both are fine and should have a similar impact on training. I would not worry too much about this.

Comment on lines +97 to +98
"in_seg": 0,
"out_seg": 0,

Copilot AI Jan 14, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The in_seg and out_seg values are set to 0 (integer) instead of 0.0 (float) for FunctionTransform, while all other transforms use float notation (0.0, 0.1, etc.). For consistency and to avoid potential type issues, use 0.0 for float values throughout the configuration.
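
For example, the FunctionTransform entries would then read (illustrative excerpt, matching the float notation used by the other transforms in the file):

```json
"in_seg": 0.0,
"out_seg": 0.0,
```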
