What I’m after
I’m working on an embedded application running on a device with an LCD screen.
I want to run some integration tests on this application, where the tester verifies that the content on the screen (text and images) matches the expected output for various use cases.
This runs in a resource-constrained environment, so I can't take screenshots directly on the device.
I thought I’d take the following route to run the test cases where the tester:
- Requests the UUT to display a particular screen
- Generates a reference image of what the screen should contain
- Takes a photograph of the display using a webcam
- Compares the photograph with the reference image
I threw a prototype together to see if this idea would work. The following are three images I generated to try it out:
- EXPECTED: the expected output - generated by writing text onto a photograph of a blank screen.
- PASS: an image that should pass the test
- FAIL: an image that should fail the test
I used the following script to do the image comparison:
import cv2
from skimage import metrics

accept = cv2.imread("Pass.png")
reject = cv2.imread("Fail.png")
expected = cv2.imread("Expected.png")

print(metrics.structural_similarity(accept, expected, multichannel=True))
print(metrics.structural_similarity(reject, expected, multichannel=True))
The output was as follows:
I think the similarity scores are too close together for me to clearly determine whether the actual image matches the expected one or not.
Could you suggest an algorithm that I can use to achieve what I'm trying to do?