Issue registering very offset images

Hello,
I am trying to use starfish to analyze RNAscope data that involves 3 rounds of imaging with DAPI plus 4 spot channels. Between rounds 1 and 2 the image positions drifted significantly, so that only about 1/5 of each image from the “same position” overlaps between imaging rounds 1 and 2. I can successfully register corresponding positions from rounds 1 and 2 using ImageJ or the RNAscope software, but I cannot get the starfish registration function to register the images. Warp.run outputs images as if it were successful, but they are clearly not actually registered across rounds. Is there another registration method, or more parameters I could change, to facilitate this more difficult registration? Below is the code I have been using. Thanks!

#Imports assumed by this snippet
from starfish.image import ApplyTransform, LearnTransform
from starfish.types import Axes

#Empty list to collect the registered primary images
prim_proj_regis = []

#Also collect just the R1 DAPI images
projected_z_stacks_nuc_R1 = []

#For each FOV:
#   learn a registration transform from the DAPI images,
#   apply that transform to the primary spot images,
#   and save the registered primary images to prim_proj_regis.
#   R1 DAPI is the reference image, so it doesn't need a registered version.
for key in range(len(list_of_keys)):
    #Select this FOV's DAPI series (DAPI images from R1-3)
    FOV_nuclei = projected_z_stacks_nuc[key]

    #Choose the R1 DAPI image from that FOV as the reference
    nuclei_zproj_ref = FOV_nuclei.sel({Axes.ROUND: 0, Axes.CH: 0})
    projected_z_stacks_nuc_R1.append(nuclei_zproj_ref)

    #Learn the translation of R2 and R3 relative to R1
    learn_translation = LearnTransform.Translation(
        reference_stack=nuclei_zproj_ref, axes=Axes.ROUND, upsampling=1
    )
    #Run on the three z-projected DAPI images from each round of that FOV
    transforms_list = learn_translation.run(stack=FOV_nuclei)

    #Then apply the learned transforms to the primary data
    warp = ApplyTransform.Warp()
    FOV_prim = projected_z_stacks_prim[key]
    FOV_prim_registered = warp.run(stack=FOV_prim, transforms_list=transforms_list)
    prim_proj_regis.append(FOV_prim_registered)

First, if you haven’t already, please see the image registration tutorial.

Without knowing how ImageJ or ACDBio registration works, my guess is that the offset is too large for the cross-correlation algorithm to identify the optimal transform. You may have to register the images prior to loading them into starfish, or put the offsets learned from ImageJ into a transforms_list and then run ApplyTransform.Warp().
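For the second workaround, here is a rough sketch of how you might build the transforms_list by hand from ImageJ offsets. The dy/dx values are hypothetical, and the TransformsList import path plus the sign and axis conventions can vary between starfish versions, so please verify the result with diagnose_registration:

from skimage.transform import SimilarityTransform
from starfish.core.image._registration.transforms_list import TransformsList
from starfish.image import ApplyTransform
from starfish.types import Axes, TransformType

#Offsets of round 2 relative to round 1, measured in ImageJ (hypothetical values)
dy, dx = 342.0, 43.0

transforms_list = TransformsList()
#Round 1 (index 0) is the reference, so it gets an identity transform
transforms_list.append(
    {Axes.ROUND: 0}, TransformType.SIMILARITY, SimilarityTransform(translation=(0, 0))
)
#skimage transforms take translation as (x, y); flip the signs if the warped
#image moves in the wrong direction
transforms_list.append(
    {Axes.ROUND: 1}, TransformType.SIMILARITY, SimilarityTransform(translation=(-dx, -dy))
)

#Apply to the primary images for this FOV (FOV_prim as in your loop above)
registered = ApplyTransform.Warp().run(stack=FOV_prim, transforms_list=transforms_list)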

Before trying one of the two workarounds, can you share the output of print(transforms_list)? That should tell us whether LearnTransform or ApplyTransform is the issue.

The output of print(transforms_list) is:

tile indices: {<Axes.ROUND: 'r'>: 0, <Axes.ZPLANE: 'z'>: 0, <Axes.CH: 'c'>: 3}
translation: y=0.0, x=0.0, rotation: 0.0, scale: 1.0
tile indices: {<Axes.ROUND: 'r'>: 1, <Axes.ZPLANE: 'z'>: 0, <Axes.CH: 'c'>: 3}
translation: y=342.0, x=43.0, rotation: 0.0, scale: 1.0

And this is an example image of the registration from diagnose_registration for position 4 post-registration. All you can see are the blood vessels but you can see that they are clearly not aligned:
[screenshot: diagnose_registration overlay for position 4, post-registration]

So it appears to come up with a transformation matrix but not an appropriate one?

So diagnose_registration output looks different before and after ApplyTransform.Warp()? Then yes, the problem is that it is unable to learn the correct transformation matrix. Based on the image you shared, I think there are not enough matching features between the red and blue channels for a strong cross correlation. Maybe you could enhance the DAPI signal or suppress the background so the algorithm has more to work with?
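If you want to try that, one option is to preprocess the DAPI stack before LearnTransform so the shared structures dominate the correlation. A minimal sketch using starfish's Filter.ClipPercentileToZero; the percentile cutoffs here are placeholder values you would need to tune on your data:

from starfish.image import Filter, LearnTransform
from starfish.types import Axes

#Suppress background so cross correlation keys on real structure
#(p_min/p_max are guesses to tune)
clip = Filter.ClipPercentileToZero(p_min=80, p_max=100)
FOV_nuclei_clipped = clip.run(FOV_nuclei, in_place=False)

#Learn the translation on the enhanced DAPI images instead of the raw ones
learn_translation = LearnTransform.Translation(
    reference_stack=FOV_nuclei_clipped.sel({Axes.ROUND: 0, Axes.CH: 0}),
    axes=Axes.ROUND,
    upsampling=1,
)
transforms_list = learn_translation.run(stack=FOV_nuclei_clipped)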

FYI, I can tell from the output of print(transforms_list) that you ran that command after Warp(), because it includes <Axes.ZPLANE> and <Axes.CH>. These are added due to a bug that causes only the specified Axes.CH to be transformed when you run Warp() with the same transforms_list more than once. The workaround is to relearn the transforms_list before every additional Warp(), as sketched below.
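In other words, something like this, reusing the variables from your loop:

#Relearn the transforms immediately before each Warp() instead of reusing
#a transforms_list that a previous Warp() has already mutated
transforms_first = learn_translation.run(stack=FOV_nuclei)
registered_prim = ApplyTransform.Warp().run(stack=FOV_prim, transforms_list=transforms_first)

#For any later Warp() (e.g. on the nuclei stack), learn a fresh copy
transforms_second = learn_translation.run(stack=FOV_nuclei)
registered_nuc = ApplyTransform.Warp().run(stack=FOV_nuclei, transforms_list=transforms_second)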