Different result using Virtual Stack

Hi,

I have been using the option “Virtual Stack” (VS) in Fiji when importing my raw scans because of memory issues with my computer.
I have now been able to use a more powerful computer, and I can import the scans without the VS option.

I assumed that the quality would not be entirely the same with and without VS, and I was able to verify it. In fact, when I used VS, the images had higher contrast than when I didn't. Why does this happen? Does anybody know why using VS gives a different image (in terms of contrast) than when it is not checked?

Thank you!
Pedro

Hi Pedro @pedgalher ,

Can you share an example data set? Furthermore, do you mean different in visualisation or different in quantitative contrast measurements?

Cheers,
Robert

Hi,

Unfortunately I cannot share any image due to non-disclosure agreements, but I can share the histograms without virtual stack (left) and with it (right). The difference is massive for this sample scan.
I converted both scans to 8-bit. The original type (32-bit) showed the same difference in contrast anyway.
For other scans, the difference is very small.

This conversion is dangerous due to information loss. To learn why, just compare quantitative analysis before and after this conversion. The effect is similar to this one:

Did you measure the contrast or is it visual impression?

Can you share histograms of the original data?

Hi,
Sorry for my late answer. I have been on a business trip and couldn't have a look at the data without the 8-bit conversion.

You made a very good point. I extracted the histograms from both the stack imported using VS and the one imported entirely without VS, and their histograms are the same with the data as 32-bit. Visually, however, the one imported using VS has much better contrast than the other.

Now I have two questions:

  • Why is there a difference (a really big difference, actually) in the visual contrast if there is no difference in their respective histograms?

  • After 8-bit conversion (even if not recommended), why do they give different histograms? Shouldn't the information loss be the same for both scans regardless of the process used for importing them (VS checked vs. unchecked)?

I don't know if you can answer them (maybe they are open questions that are yet to be addressed), but thank you very much for your help and your comments on this issue.

ImageJ computes brightness/contrast (which, to be clear, affects visualization only, not the actual raw sample values) based on known min/max data values. When using a virtual stack, the known min/max values are for the current slice only, whereas when you have the entire stack in memory, ImageJ can compute min/max globally over all planes. Doing a global brightness/contrast calculation on a virtual stack with many slices would be slow, because it would need to iterate over all the slices, loading them sequentially into memory.

The 8-bit conversion uses the current brightness/contrast (i.e. the displayed min/max) when rescaling the data: the displayed min becomes 0, the displayed max becomes 255, and everything in between is scaled linearly. If they are giving different histograms, it’s because your displayed min/max settings are different for the virtual stack and non-virtual stack.
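To make that mapping concrete, here is a small macro-language sketch of the linear rescaling described above, with made-up values (my own illustration, not ImageJ's internal code):

```ijm
// Linear mapping used by the 8-bit conversion (illustration only):
// displayed min -> 0, displayed max -> 255, linear in between, clamped.
dispMin = 120;   // hypothetical displayed minimum
dispMax = 3400;  // hypothetical displayed maximum
v = 2000;        // an example 32-bit sample value
v8 = round(255 * (v - dispMin) / (dispMax - dispMin));
v8 = minOf(255, maxOf(0, v8));  // clamp to the 8-bit range
print(v8);
```

Two stacks with identical data but different displayed min/max will therefore produce different 8-bit histograms after conversion.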

See this section of the ImageJ manual:


Amazing answer!

Thank you so much!

The 8-bit conversion uses the current brightness/contrast (i.e. the displayed min/max) when rescaling the data: the displayed min becomes 0, the displayed max becomes 255, and everything in between is scaled linearly. If they are giving different histograms, it’s because your displayed min/max settings are different for the virtual stack and non-virtual stack.

So if I understood correctly, applying the rescaling to the stack without VS takes the global min/max, while rescaling a stack imported via VS rescales each slice according to the min/max from that slice. Is that right?


Hmm… that sounds right. (@haesleinhuepf @imagejan others do you know off the top of your head?) And it wasn’t something I was thinking about when I wrote my answer above. Did you verify that via experimentation?


Hey @pedgalher,

I could imagine that the behavior you observed is a bug. Feel free to provide a short, concise example reproducing the issue; that would make it easier for us to trace the bug down.

In general, I recommend not scaling intensities using any fuzzy, not-exactly-known method. Either you determine the min/max intensity of the whole stack, or other percentiles, and rescale with them. This may take a long time if you process huge stacks. Alternatively, you can rescale intensities to a user-defined range. In both cases it's crucial that you know explicitly how the rescaling is done. Just in case reviewer #3 asks later on: “When converting images from 16- to 8-bit, you have information loss. Can you specify the 16-bit min/max values corresponding to the min/max values of the 8-bit range?” :upside_down_face:
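A macro sketch of the "explicitly known" approach (assuming a stack is already open; note that `Stack.getStatistics` scans every slice, so it can be slow on huge stacks):

```ijm
// Determine min/max over the whole stack, then convert with exactly
// those values so the rescaling is fully reproducible.
Stack.getStatistics(count, mean, min, max, std);
setMinAndMax(min, max);  // display range now equals the data range
run("8-bit");            // min -> 0, max -> 255, linear in between
print("Rescaled " + min + ".." + max + " to 0..255");
```

Because the min/max are printed, you can answer the reviewer's question exactly.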

I hope that helps!

Cheers,
Robert


Hi Robert,

Once I have some free time, I'll definitely try to experiment with the 32-bit to 8-bit conversion with Virtual Stack checked and unchecked. I really want to find an explanation for the differences in the histograms after such a conversion. I'll post my findings here.

Thank you both for your interest in this question.

Pedro

Correct, with “global” meaning the min/max that’s currently set in the Brightness&Contrast dialog, which isn’t necessarily the same as the stack min/max.

Yes. The issue is caused by VirtualStack calling Opener#openImage() whenever you ask for a new slice (so e.g. when traversing the stack for rescaling):

… and in Opener.java, you can find a call to ip.resetMinAndMax() here:

… which I suppose (without more thorough investigation, so I might be wrong…) could explain the behavior you see.

Whether this is a bug or intended behavior that has been like that for a long time (and changing it will break other workflows that rely on the current behavior) probably depends on the point of view. @Wayne might have an idea how to improve the situation.

I definitely think it would be good if the behavior didn't depend on the choice of virtual vs. non-virtual stack when the rest of the workflow is the same.

I agree. You can try to avoid rescaling here, or rescale to known min/max values before converting.


No, in both cases the rescaling will take into account your selected min/max values for the active slice, and apply these to the entire stack. Any other behaviour does not make sense, and changing this will break existing workflows.

That being said, virtual stacks should behave the same as in-memory stacks. If that is not the case, please post a macro reproducing your issue, so we can investigate further. There used to be many such bugs in ImageJ some years ago, but I reported them to Wayne and he fixed them, and I have not seen any new bugs recently with regard to different behaviour of virtual vs. real stacks.

Yes, in general converting to 8-bit leads to information loss, but not necessarily if you know exactly what you are doing. If your images are density calibrated, and you calculate a new calibration function that maps the values after conversion to be the same as before the conversion, the density values will be the same after conversion to 8-bit and any analysis results should be the same. Your only loss is then the resolution of the number of grey levels, which in most cases does not matter much, unless you later need to change the min/max window again. You need a macro to do this conversion, as this recalculation of the calibration will not happen by default. It will also most likely not work if your density calibration is not linear.

Agree! Conversion to 8-bit is only OK as long as you know what you are doing (see above) and keep a copy of the original data.

This should already be the case; any other behaviour might be a bug. I do this all the time (converting both virtual and non-virtual stacks) from 16-bit to 8-bit: selecting the contrast range of interest, recalibrating it, and saving the converted data as a copy. Then it can be converted to PNG or AVI for easy sharing with customers, emphasizing the features of interest. I have not had any problems with that for several years.

However,

Another thing you might have overlooked is the option under Options-Appearance named “Auto contrast stacks”. Make sure it is not enabled, as it might be the cause of the inconsistent behaviour you see.

[screenshot of the Appearance options dialog]

Also check this one:
[screenshot of the conversion options dialog]
Here you can disable the min/max mapping of your current data to 0/255 in the 8-bit data. Only turn this off if you know what you are doing.


Thanks for your answers

I think this is the key to why I got a different histogram after the 32-bit to 8-bit conversion. I recall that the displayed slice was not the same for both stacks (VS and non-VS), so according to this explanation I was applying a different pair of min/max values to the VS and the non-VS stack, and thus the conversion was different. I need to double-check this. Is there a way to use the min/max of the entire scan to perform the 8-bit conversion, instead of the pair from a single slice?

Another thing that you might have overlooked is that there is an option under Options-Appearance named “Auto contrast stacks”. Make sure that is not enabled, as that might be the cause of the inconsistent behaviour you see

They were unchecked, so I still don't know why the appearance was different.

Final question (I'm a little bit confused now): after the 8-bit conversion, I applied the plugin “Enhance Contrast” to the stack, checking the option “use stack histogram”. I believe that here the histogram normalization uses the min/max of the entire stack to linearly map the values in between, right?

None that I know of. However, it is easy to just use the “Measure Stack” function to read through all slices and determine the min/max values from that. You can read the values as arrays from the Results window using the Table.getColumn(columnName) and Array.getStatistics(array, min, max, mean, stdDev) functions. So this can be done with a few lines of macro code.
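The approach described above might look like the following sketch (it assumes “Min & max gray value” is enabled in Analyze › Set Measurements, so the Results table has Min and Max columns):

```ijm
// Measure every slice, then extract the global min/max
// from the Min and Max columns of the Results table.
run("Clear Results");
for (i = 1; i <= nSlices; i++) {
    setSlice(i);
    run("Measure");
}
mins = Table.getColumn("Min");
maxs = Table.getColumn("Max");
Array.getStatistics(mins, stackMin, dummy1, dummy2, dummy3);  // min of the per-slice minima
Array.getStatistics(maxs, dummy4, stackMax, dummy5, dummy6);  // max of the per-slice maxima
print("Stack min: " + stackMin + ", stack max: " + stackMax);
```

The resulting stackMin/stackMax can then be entered via the Set button in Brightness/Contrast before the conversion.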

A more robust way to get visually consistent results would be to read the histogram of the entire stack and calculate percentiles from it, as @haesleinhuepf suggested. Something like min = 2nd percentile and max = 98th percentile. Then your end result will not vary with random dark or bright pixels in your data.
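A sketch of that percentile idea (assuming an 8- or 16-bit stack is open; the per-slice histograms are summed into a stack histogram, then the 2%/98% bins are located):

```ijm
// Build a stack histogram by summing per-slice histograms.
nBins = 256;
total = newArray(nBins);
for (s = 1; s <= nSlices; s++) {
    setSlice(s);
    getHistogram(values, counts, nBins);
    for (b = 0; b < nBins; b++)
        total[b] = total[b] + counts[b];
}
// Count all pixels, then walk the cumulative histogram
// to find the 2nd and 98th percentile bins.
n = 0;
for (b = 0; b < nBins; b++) n = n + total[b];
cum = 0; lo = 0; hi = nBins - 1;
for (b = 0; b < nBins; b++) {
    cum = cum + total[b];
    if (cum < 0.02 * n) lo = b + 1;  // first bin past 2% of pixels
    if (cum < 0.98 * n) hi = b + 1;  // first bin past 98% of pixels
}
setMinAndMax(values[lo], values[hi]);  // robust display range
```

This ignores outlier pixels on both ends, so a single artefact slice cannot dominate the display range.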

Yes, that sounds right, though I haven't tried it myself. But it is probably better to do that before the conversion to 8-bit.


Edit: I just did. There is an option “saturated pixels”, which is some kind of percentile; I assume a value of 4% there would mean the same as the 2%–98% percentile range. I had forgotten that “Enhance Contrast” is there; it seems very useful in your case. So try it before the conversion, checking the “process all slices” and “use stack histogram” options.

I think this will do what you want for determining the min/max values of the stack, so you don't need to create the macro I suggested. A low saturated percentage like 0.5% will probably also work well.
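In macro form, the suggestion above would be something like this (option names taken from the macro recorder; 0.5 is the saturated-pixels percentage):

```ijm
// Stretch the display range over all slices using the stack histogram,
// allowing 0.5% of pixels to saturate, then convert.
run("Enhance Contrast...", "saturated=0.5 process_all use");
run("8-bit");
```

Running this before the 8-bit conversion makes the conversion use the stack-wide, outlier-resistant range rather than whatever slice happens to be displayed.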


Thanks for the idea. How can I use the min/max extracted from the entire stack to “manually” do the conversion to 8-bit? The way I do the conversion from 32-bit to 8-bit right now is just by clicking Image › Type › 8-bit. Is there any other way to do it?

I think this will do what you want for determining the min/max values of the stack, so then you don’t need to create the macro I suggested. A low saturated percentage like 0.5 will probably also work well.
Yup, I normally use 0.2%, but applied to the 8-bit stack. I'll have to try doing it on the 32-bit stack instead.

Pedro

You insert the min/max values you want using the Set button in the “Brightness/Contrast” dialog before the 8-bit conversion. Then 0 will correspond to the min value and 255 will correspond to the max value you set.

I tried it now on a 16-bit stack and found that it works as expected. However, it automatically applies the calculated min/max values and will therefore alter the original data. So be very careful with this; it is safest to do it on a duplicated stack.

The behaviour might be different on a 32-bit stack, as “applying” a contrast to 32-bit data does not make sense. You can set min/max values on a 32-bit stack as discussed earlier, and they will determine the window used for the min/max mapping to 0/255 in 8-bit.
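A macro sketch of this workflow, working on a duplicate so the original stays untouched (the 100/5000 min/max values are hypothetical placeholders for your own stack range):

```ijm
// Convert a copy of the stack with explicitly chosen min/max values.
run("Duplicate...", "duplicate");  // work on a copy of the whole stack
setMinAndMax(100, 5000);           // same effect as the Set button in B&C
run("8-bit");                      // 100 -> 0, 5000 -> 255
```

Because the min/max are written into the macro, the conversion is reproducible and independent of which slice happens to be displayed.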


Alright!

I think my doubts are now solved. I just want to thank all four of you who helped me with this issue. Really, thank you. The data I'm dealing with is for a research paper I'm working on right now, and even though I had worked with ImageJ before, this was the first time I really took the time to look at the software and the data in depth.

It is fair to say that I have learnt more, and more efficiently, about how Fiji works from your answers than I would have by trial and error.


UPDATE ON FINDINGS!

Hi all,

I just want to post an update on this issue: how I solved it and what the takeaways are, in case someone has a similar problem in the future. The solution and takeaways are just an extension of what people have commented in this thread. I could not have solved it without their advice and knowledge.

  • The problem: I was getting strong differences in visual contrast between CT data imported via Virtual Stack (VS) and the same data imported without VS, even though their histograms were the same. Also, the 8-bit conversion produced a different histogram depending on the import method used.

  • Solution/Finding: I found out that when the stack is imported without VS, the displayed slice is the middle one; in my case, for a 2000-slice CT data set, it was slice 1000. When the stack is imported with VS, the displayed slice is the first one. This incredibly small detail was the source of the whole problem: slice 1000 in the non-VS stack contained an INCREDIBLY SMALL BEAM HARDENING ARTEFACT that distorted the histogram and made my stack incredibly dark. Once the non-VS stack was set to the first slice and the Brightness/Contrast LUT was reset, it showed the same visual contrast as the VS stack. This was extremely surprising/exciting to discover, and annoying for all the trouble it caused.

  • Takeaways:

  1. Always make sure the same slice, with the same brightness/contrast, is displayed in the non-VS and VS stacks before an 8-bit conversion or before just visualizing the data. This confirms what @steinr and @haesleinhuepf indicated previously: both the non-VS and VS methods take the displayed slice's brightness/contrast for the 8-bit (or any other) conversion.

  2. ALWAYS check that the slice you use for the type conversion does not contain artefacts that would distort the result.

  3. When you change the displayed slice, make sure you click on “Reset” in the Brightness/Contrast window.

  4. As @steinr suggested, consider using Enhance Contrast with saturated pixels (0.5% or similar) to remove spurious pixels and make a safe conversion to 16/8-bit.

These remarks might seem very basic to the experts here, but I think they can be useful for beginners (like me) with little ImageJ experience.

Hope it helps!!
Pedro
