The basic assumption of many works on
computational color and computer vision is that color information is, or can
be, a precise, easy-to-obtain sampling of the scene reflectances and light
distribution. To this end, the classic approach is to account for the device
color space by profiling the device used. This is the good.
It is well known that these computations can be complex
and imprecise (3x3 matrix simplifications, gamut-mapping errors, etc.). This
is the bad, but we can cope with it, since more than 80 years of research
on colorimetry have produced increasingly precise device characterizations.
The ugly starts when we consider other acquisition
issues that are often not taken into account.
This talk presents some “hidden” issues
in color acquisition from real scenes that can introduce severe errors in
the color information, together with some reasons why, in most cases,
they have not been regarded as serious problems to take into
account. But perhaps it is time to start considering them.
This paper presents a unified approach for the relative pose estimation of a
spectral camera - 3D Lidar pair without the use of any special
calibration pattern or explicit point correspondences. The method works
without a specific setup or calibration targets, using only a pair of
2D-3D data. Pose estimation is formulated as a 2D-3D nonlinear shape
registration task, which is solved without point correspondences or
complex similarity metrics. The registration is then traced back to the
solution of a non-linear system of equations which directly provides
the calibration parameters between the bases of the two sensors.
The method has been extended to both perspective and omnidirectional
central cameras and was tested on a large set of synthetic Lidar-camera
image pairs as well as on real data acquired in outdoor environments.
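The core idea of the abstract above (pose estimation as a nonlinear system of equations, solved without point correspondences) can be illustrated with a toy sketch. This is not the paper's actual formulation, which integrates over corresponding 2D-3D regions: here, hypothetically, the correspondence-free constraints are geometric moments of the projected point set, and the camera, patch, and pose values are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical toy setup: a Lidar-measured 3D surface patch and the 2D region
# it projects to in a calibrated perspective camera (focal length f = 1).
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 400))
pts3d = np.column_stack([x, y, 0.2 * np.sin(3 * x) * np.cos(2 * y)])

def project(pose, X):
    """Rigid transform by pose = [rotvec (3), t (3)], then perspective division."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    Xc = X @ R.T + pose[3:]
    return Xc[:, :2] / Xc[:, 2:3]

# Correspondence-free shape descriptors: geometric moments of the projection.
ORDERS = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1), (2, 1), (1, 2), (3, 0), (0, 3)]

def moments(uv):
    return np.array([np.mean(uv[:, 0] ** p * uv[:, 1] ** q) for p, q in ORDERS])

true_pose = np.array([0.10, -0.20, 0.15, 0.30, -0.10, 4.0])
observed = moments(project(true_pose, pts3d))   # stands in for the observed 2D region

# Pose estimation as the root of a nonlinear system: the moments of the
# reprojected Lidar points must match those of the observed image region.
def residual(pose):
    return moments(project(pose, pts3d)) - observed

sol = least_squares(residual, x0=np.array([0, 0, 0, 0, 0, 3.5]))
```

Nine moment equations over-determine the six pose parameters; as in the paper's approach, no individual point pairings are ever established, only aggregate constraints on the two regions. Convergence of the local solver does depend on a reasonable initial guess.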