I’m a new user of ImageJ, and I’m interested in measuring and comparing fluorescence intensity as part of a time-course analysis on embryos at different developmental stages. For my experiments I have antibody-stained cells across a variety of conditions (wild-type vs. knockout, developmental days, etc.). I’ve read several ImageJ articles and forum posts about this topic, but I’m having difficulty connecting what I’ve read into a “tailored” analysis protocol for my experiment. I’ve included a sample image below of the fluorescence I’m hoping to measure for each of the first 3 channels (ch 4 is my Hoechst). There are around 150 cells in each image. I’d like to count each nucleus and measure its fluorescence intensity (I am manually counting with the Cell Counter tool) in order to get an average intensity per cell. I have struggled to get good nuclear segmentation, even after clearing my samples and changing acquisition parameters (Z step from 5 µm to 2 µm). My images are acquired with identical settings, meaning exposure time, illumination settings, camera gain, etc. My images are currently in .nd2 format, 16-bit, but I cannot upload them here in that format, so I uploaded a sample as .tif.
Q1: I am not sure about the value I should use to represent my data. In brief, I open, subtract background, despeckle, and set B&C prior to splitting channels and doing a Max IP. Does this preprocessing have any influence on my intensity measurements?
Q2: After generating a Max IP of the merged 4 channels, I define the ROI for each nucleus of interest, and I also define 5-10 ROIs of the background. I export my results to an Excel table where I paste Area, IntDen, and RawIntDen per channel. According to the Fiji tutorial, Integrated Density calculates and displays two values: “IntDen” (the product of Area and Mean Gray Value) and “RawIntDen” (the sum of the values of the pixels in the image or selection). Is it okay to just plot IntDen vs. nucleus in an XY graph?
I am not sure if I should work with RawIntDen instead. I am actually working with the latter but do not really understand the difference. I calculate using the averages: (RawIntDen/Area) − (AvgRawIntDenCtrl/AvgAreaCtrl), where the first term (RawIntDen/Area) comes from my nucleus of interest and its respective area, and I subtract the second term (AvgRawIntDenCtrl/AvgAreaCtrl), which comes from my background control spots. Which method is the proper way of representing the data?
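To make this concrete, here is a small Python/NumPy sketch of my calculation with made-up pixel values (the numbers are hypothetical, just to show the formulas):

```python
import numpy as np

# Hypothetical pixel values for one nucleus ROI and one background ROI
nucleus = np.array([1200, 1350, 1100, 1420], dtype=float)   # 4 pixels
background = np.array([500, 495, 505, 500], dtype=float)    # 4 pixels

# ImageJ definitions (for an uncalibrated image, Area = pixel count):
raw_int_den = nucleus.sum()        # RawIntDen: sum of pixel values
area = nucleus.size                # Area in pixels
mean = raw_int_den / area          # Mean Gray Value
int_den = area * mean              # IntDen = Area * Mean
# Note: for an uncalibrated image, IntDen equals RawIntDen;
# they differ only when a spatial calibration (e.g. µm²) is set.

# My background-corrected mean per nucleus:
corrected_mean = (raw_int_den / area) - (background.sum() / background.size)
print(int_den, raw_int_den, corrected_mean)  # 5070.0 5070.0 767.5
```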
Q3: My images are currently in .nd2 format, 16-bit, therefore my arbitrary units range between 86 and 16000. I’ve seen people with A.U. ranging between 0 and 255, which I assume comes from converting to an 8-bit image. Do I need to convert my images to 8-bit before analyzing them? What purpose does converting the image to a different bit depth serve?
Thank you in advance for your help!
I have attached the metadata and one image for your reference.
To at least get started with your query, I’ve put some thoughts below. Some others might chime in too.
Q1 - Usually I would say pre-processing steps like the ones you are doing are there to get an accurate segmentation of the images (your nuclei outlines); once you have the outlines, you transfer them to the raw image and measure those regions there. Brightness and contrast changes don’t change the underlying values, so those are fine. Additionally, if you don’t want to try 3D segmentation and measurements, I would suggest looking at a different projection type than a maximum, i.e. an average or sum, as both of these are, in my view, much preferable if you are going to do intensity calculations.
Q3 - More data is generally preferable if you aren’t having file-size or computational issues, so I would stick to 16-bit. A lot of teaching material talks about 8-bit, maybe due to convention, but also because sticking to 0-255 is a bit easier for explanations than 0-16383 or 0-65535. In general, I think most fluorescence microscope sensors will be at least 12-bit.
So finally, I don’t think your image had finished uploading before you pressed post so could you try again? The ND2 would be fine too but you’d need to put it in an external location, e.g. Firefox Send, Google Drive, WeTransfer and link here. Otherwise a TIFF is fine too. I would like to see the 3D stack out of interest.
Thanks a lot for your swift reply. Please find attached a .tif copy. I have tried 3D segmentation, but it did not work well for embryos because the nuclei are too close to each other in my region of interest, the inner cell mass. I have just tried uploading a .tif file and it seems that is not feasible at the moment. Could you try with this link? Thanks a lot in advance! Have a great weekend!
First to your questions, then hopefully to a satisfying solution.
To question 1:
The ImageJ intrinsic function Subtract Background is good for background subtraction in a processing context, to extract your features (as @lmurphy mentioned already). BUT: it is not quantitative. Thus, it will influence your measurements and make them less accurate.
Despeckle is a median filter that non-linearly alters your data. It is good for processing, but not for the image in which you still need to measure pixel intensities.
NEVER, EVER must the contrast or brightness be changed before intensity measurements. This is easy to see: if you change the pixel values to achieve a higher contrast, you will no longer be able to measure the original values.
Now, the exception to this (and @lmurphy might have referred to this): you can use the Reset button in the B&C dialog or change the sliders “transiently”; this is OK. BUT as soon as you make the change persistent in the image by pressing Apply, you again introduce changes to your pixels.
So, you can use this procedure on a copy of your image to extract the ROIs for the nuclei, but you should measure on the original image. That said, a quantitative background subtraction, i.e. measuring the imaging background outside your sample, should be considered.
In 3D, this is a little more tricky because you would need to measure and subtract the background individually per slice.
When looking at your data, the background in the first 3 channels is roughly between 1 and 3% of the maximal image intensity used. So, that would be the influence it has on your measurement.
Furthermore, it is rather constant over all slices. I tested it in the 1st channel (magenta): over 76 slices, the mean value in a circular background ROI was around 500 gray values and its SD around 4 gray values. I would consider this deviation negligible.
So, you could measure the background average of all slices and subtract this one value from the corresponding channel stack in a single step. You can achieve that by splitting the channels of the original, drawing a ROI in a completely empty (sample-free) area, and adding this ROI to the ROI Manager ([Ctrl] + [T]).
Then choose the Multi Measure option from the ROI Manager, measure all 76 slices per channel, and in the Results table go to →Results →Summarize. That will give you the average intensity of your ROI at the end of the table in the Mean column. That is the value you can then subtract from all slices via →Process→Math→Subtract. Obviously, this also has a certain bias and error, but in this set of data it is rather low.
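In case it helps, the same logic can be sketched outside ImageJ, e.g. in Python with NumPy (the stack and ROI here are synthetic stand-ins; in Fiji you would use the ROI Manager steps above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical single-channel stack: 76 slices of 64x64 pixels,
# with a roughly constant background around 500 gray values (SD ~4)
stack = rng.normal(500, 4, size=(76, 64, 64))

# "ROI" in an empty, sample-free corner of every slice
roi = stack[:, :10, :10]

# Mean background per slice, then the overall average
# (what Results > Summarize reports as Mean)
per_slice_mean = roi.mean(axis=(1, 2))
avg_background = per_slice_mean.mean()

# Subtract the single average from all slices (Process > Math > Subtract),
# clipping at 0 as ImageJ does for unsigned integer images
corrected = np.clip(stack - avg_background, 0, None)
print(round(avg_background))  # ~500
```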
To question 2:
As @lmurphy pointed out… DO NOT use maximum projections for measurements, because they just give you the overall brightest pixels in your volume. Those have little to do with the actual intensity distribution in your sample. Average or Sum projections would be possible but are still error prone, because your objects are basically small spheres: in a projection you see their maximal extent and measure that, but seen in 3D, the upper and lower ends of your nucleus are much smaller than the middle part, so the surrounding background is included in the average value. (The sum is less error prone, but sums are not so convenient to compare, since they also depend on object size and not only on intensity.)
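To illustrate why the projection type matters, here is a toy NumPy example (a synthetic “nucleus” pixel whose intensity peaks in the middle slices, on a constant background; all values are made up):

```python
import numpy as np

# 5-slice toy stack: background 100 everywhere, one "nucleus" pixel
# whose intensity peaks in the middle slice, like a small sphere in z
stack = np.full((5, 3, 3), 100.0)
stack[:, 1, 1] += [50, 200, 400, 200, 50]  # sphere-like z-profile

center = stack[:, 1, 1]                    # z-column through the nucleus
print(center.max())   # 500.0 - max projection keeps only the brightest slice
print(center.mean())  # 280.0 - average dilutes the peak with dim slices
print(center.sum())   # 1400.0 - sum also scales with object size
```

The maximum keeps only the brightest voxel per pixel column and discards the rest of the real signal; average and sum keep all of it, at the cost of mixing in background from the slices above and below the nucleus.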
In your calculation you do take the background into account, but you first measure it together with your signal and only then correct for it. First subtracting, then measuring would be preferable (as explained technically above).
To question 3:
DO NOT convert your images to 8-bit, because the conversion will make you lose data and accuracy in your measurements. For object extraction (thresholding etc.) this conversion is OK and makes the calculation faster, but your pixel values should be measured in the original image, as pointed out above.
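A quick sketch of what the conversion does to your values, assuming a scaled 16-bit to 8-bit conversion over the full 0-65535 range (ImageJ actually maps the current display range onto 0-255, but the effect is the same):

```python
import numpy as np

# Two 16-bit pixel values that differ by a real, measurable amount
pixels16 = np.array([4800, 5000], dtype=np.uint16)

# Scaled conversion to 8-bit over a 0-65535 range:
# every ~257 gray values collapse onto a single 8-bit level
pixels8 = np.round(pixels16 / 65535 * 255).astype(np.uint8)

print(int(pixels16[1]) - int(pixels16[0]))  # 200 gray values apart in 16-bit
print(int(pixels8[1]) - int(pixels8[0]))    # 0 - indistinguishable in 8-bit
```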
So, much talking, now here a potential solution:
The following macro relies on the 3D Suite from @ThomasBoudier. So, you will need to update your Fiji and activate his update site (3D).
I adapted the values for your example image, but they can surely be optimized further. After letting the macro run on your original stack, the 3D ROIs of your nuclei should be in the 3D Manager and you should receive a list of center points which is the overall nuclei count.
Sometimes it doesn’t add them, for whatever reason; then you can try to run it on the original ND2 image again, or add the extracted objects with the Add Image function from the 3D Manager. Check out the explanations on @ThomasBoudier’s page.
Once you have the ROIs in the 3D Manager, you can switch to your original image and measure the intensity in your first three 16-bit channels in three-dimensional space, after quantitative background subtraction (see the answer to question 1). I would consider this the most accurate result you are likely to get from images like yours.
Hope that helps and gives a little insight into the pitfalls.