Help Normalizing Large Stack Volumes for Trainable Weka Segmentation

I am working with retinal tomography volumes in TWS, each measuring 1024 px cubed (1024 slices of 1024 × 1024 px).

The structure is roughly as follows:

The area of interest is the white membrane, highlighted in yellow in the cross-sectional scan below:

This membrane is difficult to image and its signal is very weak, even in this scan from a bleeding-edge scanner. The stronger signal below it is retinal tissue, which is not an area of interest.

I had acceptable results using TWS with edge detectors to enhance the membrane (see below), but I am struggling to normalize the data so that the classifier generalizes to future data.

Exposure and contrast vary from stack to stack, and I am not able to standardize them during acquisition.

I do not have any prior experience with normalization. I have tried the Running Z-Projector to take a running average of slices, and I have also tried analyzing the stack histogram and setting the min/max brightness to the range from −2 to +1 standard deviations around the mean. Both methods still result in a classifier that generalizes poorly.
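
For concreteness, here is a minimal NumPy/SciPy sketch of what those two attempts do, written outside Fiji. The stack is assumed to already be loaded as a 3-D array (e.g. via tifffile), the window size of 5 is arbitrary, and the function names are just for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def running_mean_z(stack, window=5):
    """Running average along the slice (z) axis, like Running Z-Projector."""
    return uniform_filter1d(stack.astype(np.float32), size=window, axis=0)

def normalize_stack(stack, lo_sd=2.0, hi_sd=1.0):
    """Rescale a stack to [0, 1], clipping at mean - 2*SD and mean + 1*SD.

    The asymmetric upper bound deliberately lets the bright retinal
    tissue saturate so the faint membrane gets more of the output range.
    """
    data = stack.astype(np.float32)
    mean, sd = data.mean(), data.std()
    lo, hi = mean - lo_sd * sd, mean + hi_sd * sd
    return (np.clip(data, lo, hi) - lo) / (hi - lo)

# e.g. normalized = normalize_stack(running_mean_z(stack))
```

Applying something equivalent to `normalize_stack(running_mean_z(stack))` to each volume is essentially the preprocessing I have been feeding to TWS.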

I don’t mind if the strong signal from the retinal tissue becomes overexposed, but I am really struggling with the normalization. Any suggestions?