It is worth knowing that this illusion works because the checkerboard image, as you may see it
on your laptop, projects onto your retina at a size that makes the retina's local adaptation take
both squares into account at the same time.
The foveal vision area covers roughly one inch at one meter (and because your eye moves
continuously, with the so-called "saccades", your brain is able to reconstruct the entire
color scene in real time). This means that only a single letter, either "A" or "B", can hit
your fovea at any given time.
The point is that, even if you can't see both letters at the same time in a single eye fixation,
when looking at one letter your fovea also takes in light information from what is around it.
In other words, the fovea actually perceives the neighboring cells as well.
The net effect is that when looking at one area, your eye locally adapts to luminance, filters noise,
enhances contours, etc., taking into account what *surrounds* this area, and this is what makes the
illusion work. We say that *the retina works in a "center-surround" manner*.
So the "A" cell, being surrounded by lighter cells, can be perceived as darker. By comparison, cell "B"'s
neighborhood is darker, so cell "B" is perceived as lighter.
Finally, since shadow edges are soft, the retina discards this information. Shadows therefore do not disrupt the overall perception of the chessboard, making it possible to be "confidently fooled" by the perceived luminance of the cells.
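This center-surround behaviour can be sketched with a toy model that subtracts the average surround luminance from a cell's own luminance. The intensities below are made-up illustrative values (not measured from the actual illusion image), and the model is a deliberate oversimplification of real retinal processing:

```python
import numpy as np

def center_surround(patch):
    """Toy center-surround response: the center value minus the mean of
    its surround. A crude stand-in for retinal local adaptation."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    surround = (patch.sum() - center) / (patch.size - 1)
    return center - surround

gray = 120  # the two central squares share this exact intensity

# "A" sits among lighter squares, "B" among darker ones (illustrative values)
around_a = np.full((3, 3), 180); around_a[1, 1] = gray
around_b = np.full((3, 3), 60);  around_b[1, 1] = gray

print(center_surround(around_a))  # -60.0 -> perceived darker than its surround
print(center_surround(around_b))  # 60.0  -> perceived lighter than its surround
```

Even though both central cells share the same intensity, the sign of the response flips with the surround, which is exactly the asymmetry the illusion exploits.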
Reproducing the illusion
------------------------
Among other things, the bioinspired module mimics the parvocellular retina process, that is,
our foveal vision, and reproduces our eyes' local adaptation.
This means we can expect the parvo channel output to contain luminance values close to those
we perceive with our eyes. Specifically, in this case we expect the RGB values of the "B" square
to actually be lighter than those of the "A" square.
To correctly mimic what our eye does, we need OpenCV to perform the local adaptation on the right
portion of the image. This means we have to ensure that OpenCV's notion of "local" matches our
image's dimensions; otherwise the local adaptation won't work as expected.
For this reason we may have to adjust the **hcellsSpatialConstant** parameter (which technically
specifies the low spatial cut frequency, i.e. the sensitivity to slow luminance changes) depending on
the image resolution.
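In practice the retina parameters are stored in an OpenCV XML/YAML storage file, which you can generate with *Retina::write()* and load back with *Retina::setup()*. A fragment of the parvocellular section might look like the following; the values shown are an illustrative starting point reflecting the usual defaults, not a recommendation:

```xml
<?xml version="1.0"?>
<opencv_storage>
  <OPLandIPLparvo>
    <colorMode>1</colorMode>
    <normaliseOutput>1</normaliseOutput>
    <photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
    <photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
    <photoreceptorsSpatialConstant>5.7e-01</photoreceptorsSpatialConstant>
    <horizontalCellsGain>0.01</horizontalCellsGain>
    <hcellsTemporalConstant>0.5</hcellsTemporalConstant>
    <!-- increase for higher-resolution images, so that "local" still
         covers a comparable portion of the checkerboard -->
    <hcellsSpatialConstant>7.</hcellsSpatialConstant>
    <ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity>
  </OPLandIPLparvo>
</opencv_storage>
```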
For the image in this tutorial, the default retina parameters should be fine.
To feed the image to the bioinspired module, you can use either your own code or
the *example_bioinspired_retinaDemo* example that comes with the bioinspired module.