The problem

As already mentioned, a modern digital camera usually determines many parameters by itself, based on a few measurements of the scene to be photographed. Slight changes in that scene can result in quite different images, more so than the changes themselves would suggest. Even in full manual mode, a photographer will have a very hard time obtaining two identical images of an almost unchanged scene!

This is not the camera's fault: it has to estimate many parameters from the content of the scene using a limited set of measurement areas, with very little a priori knowledge about that scene and no knowledge whatsoever about the intentions of the photographer. For instance, the camera will try to estimate the color of the lighting based on a white or gray area, and adjust the color gains to achieve a more or less constant response (this is called white balancing, and is similar to what the human visual system does when confronted with differently colored lighting). This is demonstrated in the images below for two different cameras. Notice the big difference in white balancing (or lack of it) between the Nikon and Canon cameras!
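The gain adjustment described above can be sketched in a few lines. This is only an illustration of the principle, not the algorithm any particular camera uses; the function name, the choice of normalizing to the green channel, and the synthetic gray-card image are all assumptions made for the example.

```python
import numpy as np

def white_balance(img, patch):
    """Scale each channel so a reference patch becomes neutral.

    img   : float RGB image, shape (H, W, 3), values in [0, 1]
    patch : (row_slice, col_slice) locating a white or gray area
    """
    # Mean response of each channel over the reference patch
    means = img[patch].reshape(-1, 3).mean(axis=0)
    # Per-channel gains that equalize the responses (normalized to the
    # green channel, a common convention in camera processing)
    gains = means[1] / means
    return np.clip(img * gains, 0.0, 1.0)

# A gray card under warm (reddish) light: R > G > B
img = np.full((4, 4, 3), [0.6, 0.5, 0.4])
balanced = white_balance(img, (slice(0, 4), slice(0, 4)))
# After balancing, the patch is neutral: all channels equal
```

The same scene under a bluish illuminant would yield different means, hence different gains; the balanced results converge toward the same neutral rendering, which is exactly what the camera is trying to achieve automatically.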

Fig. 3: Three images taken in full automatic mode (exposure, aperture, white balance); illuminants from left to right: CIE D65 (typical outdoor lighting), TL84 (fluorescent tube) and CIE A (tungsten lighting). Top row camera is a Nikon D200, bottom row is a Canon 10D.

Similarly, the computation of proper exposure time and aperture depends on the luminance of the entire scene or a part of it. Again, the results of these estimates can strongly affect the resulting picture. For instance, a common problem with most cameras we came across is a tendency to slightly overexpose, especially under more strongly colored lighting, resulting in saturation of one or more channels of some patches (typically the 'stronger' ones like white, red, orange, yellow). When this happens, data is irrevocably lost.
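Channel saturation of the kind described above is easy to detect after the fact, even though the lost data cannot be recovered. The following sketch flags pixels where at least one channel has clipped; the function name, the 0.99 threshold, and the synthetic overexposed patch are assumptions chosen for illustration.

```python
import numpy as np

def clipped_fraction(img, threshold=0.99):
    """Fraction of pixels with at least one saturated channel.

    img: float RGB image with values in [0, 1]. Values at or above
    `threshold` are treated as clipped: the sensor response is no
    longer proportional to the scene there, so the original value
    cannot be recovered from the image.
    """
    clipped = (img >= threshold).any(axis=-1)
    return clipped.mean()

# A slightly overexposed red patch: the red channel has clipped,
# so every pixel counts as damaged even though G and B look fine
img = np.full((8, 8, 3), [1.0, 0.7, 0.6])
frac = clipped_fraction(img)
```

A check like this, run on the calibration patches, is one way to notice that an exposure was unusable before trying to measure colors from it.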

By now it should be clear that reproducible imaging using digital cameras is almost impossible, and that every image is defined in its own RGB color space, determined by the scene conditions and camera settings. Unless this problem is addressed, it is absolutely pointless to exchange, compare or measure colors on such images for any serious work.

Moreover, the display of such images may lack realism. Indeed, most modern display devices (read: monitors) adhere more or less to the 'sRGB' standard (or can be set up to do so), the de facto standard for display on the web. Many modern cameras also have an 'sRGB' setting, but again, how well this works in creating web-ready images depends on some or all of the camera estimations and guesses mentioned in the previous paragraphs. Obviously, if the RGB color space of the image is very different from the sRGB color space, the displayed result will be poor from the point of view of realism.
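One concrete part of the sRGB standard is its nonlinear transfer function (defined in IEC 61966-2-1), which a monitor assumes was applied to the pixel values it receives. The sketch below implements that encoding; it shows why image data in a different space looks wrong on an sRGB display: for example, a linear value of 0.5 should be stored as roughly 0.735, so sending linear data unencoded makes midtones appear much too dark.

```python
def linear_to_srgb(c):
    """sRGB transfer function (IEC 61966-2-1) for a linear value in [0, 1].

    Small values use a linear segment; the rest use a power curve with
    an exponent of 1/2.4 plus an offset.
    """
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

# A mid-gray of 0.5 in linear light encodes to about 0.735 for display
mid = linear_to_srgb(0.5)
```

The transfer function is only one of the differences between color spaces (the primaries and white point matter as well), but it already illustrates how large the mismatch between an image's own RGB space and the display's assumed space can be.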

Next: Our solution.