Alright. Sorry about the delay and thank you all for the suggestions; I was finally able to do exactly what I needed to do.
Here’s the scenario in case anybody’s still curious:
I’m servicing an old MRI scanner from the early 90s (actually a prototype, not the finished product) which loads a 12-bit grayscale LUT from disk. Unfortunately the LUT got corrupted by disk rot and had to be reconstructed from the descriptions in the manual. The LUT itself is stored in a 16-bit format for byte-alignment reasons, but the DAC only handles the compressed 12-bit values, which are then expanded back to full 16-bit values, much like a PC’s VGA output did with its 6-bit color palettes back in the day. The output values therefore had to be corrected (specifically, dynamic-range compressed) to avoid a variety of visual artifacts during display.
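For anyone who wants to play with the packing, here is a minimal sketch of the compress/expand step. The bit-replication scheme (top four bits copied into the low nibble, VGA-style) is my assumption about this particular DAC, not something taken straight from the manual:

```java
// Minimal sketch of the pack/expand step, assuming the DAC replicates the top
// bits into the low nibble the same way VGA expanded 6-bit palette entries.
// The exact bit layout is my guess, not documented in the manual.
public final class Lut12 {

    /** Compress a full 16-bit gray value to the 12-bit code stored in the LUT. */
    static int compressTo12(int gray16) {
        return (gray16 >> 4) & 0x0FFF;           // keep the 12 most significant bits
    }

    /** Expand a 12-bit code back to 16 bits by bit replication (VGA-style). */
    static int expandTo16(int code12) {
        return ((code12 << 4) | (code12 >> 8)) & 0xFFFF;  // top 4 bits fill the low nibble
    }

    public static void main(String[] args) {
        int code = compressTo12(0xABCD);         // -> 0xABC
        System.out.printf("0x%03X -> 0x%04X%n", code, expandTo16(code));  // 0xABC -> 0xABCA
    }
}
```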
But why not correct with 8-bit values directly? Wouldn’t that yield a practically identical result? It turns out the ROM actually walks through every single entry in the LUT at startup, and anything outside a very, very tight range causes the boot sequence to fail altogether. It’s not even a checksum, which is what I figured at first; it’s a very precise test that demands EXACT 16-bit values in this particular format.
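My current reading of that boot test, as a hedged sketch (the exact rule is inferred from trial and error, I couldn’t find it documented anywhere): every stored 16-bit entry has to equal the canonical expansion of its own top 12 bits.

```java
// Hedged sketch of what the boot-time check appears to require: every stored
// 16-bit entry must be the exact expansion of its own top 12 bits, or startup
// aborts. This rule is inferred from experimentation, not from documentation.
public final class BootCheck {

    static boolean lutPassesBootCheck(int[] lut) {
        for (int entry : lut) {
            int code12 = (entry >> 4) & 0x0FFF;
            int canonical = ((code12 << 4) | (code12 >> 8)) & 0xFFFF;
            if (entry != canonical) {
                return false;                    // a single off-by-one value kills the boot
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(lutPassesBootCheck(new int[]{0xABCA, 0x0000, 0xFFFF}));  // true
        System.out.println(lutPassesBootCheck(new int[]{0xABCB}));                  // false
    }
}
```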
Ultimately the missing piece of the puzzle was the calibration trick suggested by Michael Schmid, though I still had to figure out the exact sequence of actions needed for the changes to stick. That was made a bit more complicated than necessary by ImageJ auto-correcting the values in the background, which may well be a desirable default for the kind of work this beautiful piece of software was originally intended for, but it certainly almost had me.
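In case it helps the next person, this is roughly the sequence that worked for me, written against the ImageJ Java API rather than the GUI. Treat it as a sketch of my own setup, not as Michael’s exact recipe: the identity calibration, the fixed display range and the scaling switch are the choices I ended up with.

```java
// Roughly the sequence that made the raw values stick for me, via the ImageJ
// Java API. The identity calibration, fixed display range and the conversion
// scaling switch are my own choices, not necessarily the suggested recipe.
import ij.ImagePlus;
import ij.measure.Calibration;
import ij.process.ImageConverter;
import ij.process.ShortProcessor;

public class PinRawValues {
    public static void main(String[] args) {
        // Hypothetical container: one 16-bit pixel per LUT entry (4096 entries).
        ShortProcessor sp = new ShortProcessor(4096, 1);
        ImagePlus imp = new ImagePlus("LUT", sp);

        // Identity calibration so measured values are the raw 16-bit pixel values.
        Calibration cal = new Calibration(imp);
        cal.setFunction(Calibration.STRAIGHT_LINE, new double[]{0.0, 1.0}, "gray value");
        imp.setCalibration(cal);

        // Don't rescale pixel values when converting between bit depths.
        ImageConverter.setDoScaling(false);

        // Pin the display range so ImageJ doesn't auto-adjust it behind my back.
        imp.setDisplayRange(0, 65535);
        imp.updateAndDraw();
    }
}
```

With scaling disabled and the display range pinned, the raw 16-bit values survived the round trip for me; your mileage may vary.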
Best regards.