Intersection of a bitmap image and a non-regular pixel grid

Hi All,

I’m working on an LED project involving printed circuit boards where the LEDs are placed in an “irregular way”: they follow neither a Cartesian grid nor a polar grid. It’s more like an equally spaced curve follower. I want to be able to display animations on those circuit boards, and for that I’m looking for a strategy to compute the intersection between my pixel map and a bitmap image.
Here’s an example of a low-res image with the LED pixel map on top of it (left).

I’m looking for a strategy or algorithm that would let me compute the RGB value of each pixel (LED) based on the surrounding colors of the bitmap image.

I have a strategy but I’m stuck at one point. Each LED falls inside a square of the bitmap image and is surrounded by 8 other pixels, each with its own colour. I want to combine the RGB values of those 9 pixels by weighting them according to the distance between the LED and the centre of each bitmap pixel (on the right of the image above). Right now I have a Python script that generates, for each LED, a 3×3 array containing the distances to the surrounding pixel centres. It handles the cases where the LED is on the border or in a corner of the bitmap image (where there are only 6 or 4 surrounding pixels).
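
The distance computation itself is nothing fancy; it’s roughly along these lines (the coordinate convention, with pixel centres at integer + 0.5, and the names are only illustrative):

```python
import numpy as np

def neighbor_distances(led_x, led_y, img_w, img_h):
    """Distances from an LED (in bitmap pixel coordinates) to the centres
    of the surrounding 3x3 block of pixels. Pixel centres are assumed to
    sit at integer + 0.5. LEDs on a border or corner simply get fewer
    entries (6 or 4)."""
    cx, cy = int(led_x), int(led_y)              # pixel the LED falls into
    indices, dists = [], []
    for row in range(cy - 1, cy + 2):
        for col in range(cx - 1, cx + 2):
            if 0 <= col < img_w and 0 <= row < img_h:
                dx = led_x - (col + 0.5)         # offset to that pixel's centre
                dy = led_y - (row + 0.5)
                indices.append((col, row))
                dists.append(np.hypot(dx, dy))
    return indices, np.array(dists)
```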

My problem is that I have no idea what the right way is to combine those distances and RGB values into a single colour: linear, polynomial, logarithmic…? My goal is to stay as close as possible to the original image and get smooth colour fades.

Any idea or keywords to broaden my search would be really appreciated.

Thanks

One question that might not be obvious: if your bitmap image has a pixel of value (r, g, b), does that same (r, g, b) value give you the same color on the LED?
If yes, then at least you don’t need to calibrate…

Then, if I understand correctly, the LED should get a color that corresponds to a “smoothed” version of the 8 neighboring colors plus the center color.

In image processing we typically use either a mean filter or a Gaussian filter to achieve that kind of smoothing.
The LED value would then be a weighted sum of the neighboring pixel values of the bitmap image, with all weights summing to 1 so you stay within the original display range.

In the simplest case, the mean filter, each coefficient is just 1/9.
If you want the weights to depend on the distances, you could replace the 1/9 with coefficients that are inversely proportional to distance, for example w_px = (1/d_px) / (1/d_p1 + 1/d_p2 + … + 1/d_p9), with d_px the distance from your LED to the center of the neighboring pixel px.

That way the closest pixels contribute the most, and in the limit where all the distances are equal you fall back to the 1/9 ratio of the mean filter.
If you use a Gaussian distribution, it gets a bit more complicated… but still doable.
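
A minimal sketch of that weighted sum (assuming you already have, for each LED, the surrounding pixel colors and the distances to their centers; the small eps just avoids dividing by zero when the LED lands exactly on a center):

```python
import numpy as np

def blend_color(colors, dists, eps=1e-6):
    """colors: (N, 3) RGB values of the 9, 6 or 4 surrounding pixels,
    dists:  (N,) distances from the LED to those pixel centers.
    Weights are inversely proportional to distance and normalized to sum to 1."""
    colors = np.asarray(colors, dtype=float)
    w = 1.0 / (np.asarray(dists, dtype=float) + eps)
    w /= w.sum()
    return w @ colors        # blended (r, g, b) for this LED
```

Swapping the 1/d weights for exp(-d^2 / (2 * sigma^2)) gives you the Gaussian variant.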

To answer your first question: I may have to do some color calibration, but I’ll handle that later in the process. For now, I’m assuming the colors are consistent.

Your suggestion was really interesting. I went for an implementation of a 2D Gaussian function that takes the distance (combining the x and y coordinates) and sigma as arguments. I normalize the 9, 6, or 4 coefficients and apply them to the RGB values. It works quite nicely, and I can tweak the settings to decide how sharp I want to keep my image. Perfect.
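
In case it’s useful to anyone, the core of it looks roughly like this (names are illustrative; sigma is in bitmap-pixel units and controls how sharp the result stays):

```python
import numpy as np

def gaussian_blend(colors, dists, sigma=0.6):
    """colors: (N, 3) RGB values of the 9, 6 or 4 surrounding pixels,
    dists:  (N,) distances from the LED to those pixel centers.
    A smaller sigma keeps the image sharper; a larger one fades more."""
    colors = np.asarray(colors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    w = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))   # isotropic 2D Gaussian of the radial distance
    w /= w.sum()                                     # normalize so the weights sum to 1
    return w @ colors
```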
