Face-detection software is primed with a set of mathematical rules describing the landmarks that identify a human face: typically two eyes, eyebrows, a nose and lips. The image is first ‘downsampled’ by the software – to reduce the amount of information it has to process – and then analysed.
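To make the downsampling step concrete, here is a minimal sketch – not any camera's actual firmware – that shrinks a grayscale image by averaging each 2×2 block of pixels, cutting the data to a quarter before any face analysis would run:

```python
def downsample_2x2(image):
    """Average each 2x2 block of a grayscale image (a list of pixel rows)."""
    out = []
    for r in range(0, len(image) - 1, 2):
        row = []
        for c in range(0, len(image[0]) - 1, 2):
            # Sum the four pixels in this block and take their mean.
            block = (image[r][c] + image[r][c + 1] +
                     image[r + 1][c] + image[r + 1][c + 1])
            row.append(block // 4)
        out.append(row)
    return out

original = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [50, 50, 90, 90],
    [50, 50, 90, 90],
]
print(downsample_2x2(original))  # -> [[10, 200], [50, 90]]
```

Real cameras use more sophisticated resampling filters, but the principle is the same: fewer pixels means far less work for the detection stage.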
By measuring the differences between the shadows created by facial features, a camera can identify whether or not those features match the expected layout of a face. Using the contrast created by the whiteness of eyes and teeth, a camera can also tell when a subject is blinking or smiling, and can alter its settings accordingly if so programmed.
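The shadow-and-contrast test above can be sketched as a Haar-like feature check, the kind of rule used in classic face detectors: compare the summed brightness of two adjacent regions, such as a dark eye band sitting above a brighter cheek band. The region positions and threshold here are invented purely for illustration:

```python
def region_sum(image, top, left, height, width):
    """Total brightness of a rectangular region of a grayscale image."""
    return sum(image[r][c]
               for r in range(top, top + height)
               for c in range(left, left + width))

def looks_like_eye_band(image, top, left, height, width, threshold):
    """True if the upper region is darker than the one below it by at
    least `threshold` - a crude stand-in for one face-layout rule."""
    upper = region_sum(image, top, left, height, width)
    lower = region_sum(image, top + height, left, height, width)
    return (lower - upper) >= threshold

# A tiny image with a dark band over a bright band passes the test;
# a uniformly grey image does not.
banded = [[20] * 4, [20] * 4, [180] * 4, [180] * 4]
flat = [[100] * 4 for _ in range(4)]
print(looks_like_eye_band(banded, 0, 0, 2, 4, 500))  # -> True
print(looks_like_eye_band(flat, 0, 0, 2, 4, 500))    # -> False
```

A real detector evaluates thousands of such contrast tests at many positions and scales, and only regions passing nearly all of them are reported as faces.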
Newer cameras are beginning to incorporate face recognition too. By taking a series of pictures of a person from several angles, the clever software is able to store information about the spacing of their unique facial features. This can then be used to give them autofocus priority when taking pictures in crowds, as well as to automatically tag the photographs.
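A toy sketch of that recognition idea – not any manufacturer's actual algorithm – stores each enrolled face as a vector of distances between landmarks (eye to eye, eye to nose, and so on) and matches a new face to the nearest stored vector within a tolerance. All names and measurements here are invented:

```python
import math

def match_face(spacings, enrolled, tolerance=5.0):
    """Return the name of the closest enrolled face, or None if no
    stored spacing vector is within `tolerance` of the new one."""
    best_name, best_dist = None, float("inf")
    for name, stored in enrolled.items():
        d = math.dist(spacings, stored)  # Euclidean distance
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tolerance else None

# Hypothetical database: feature spacings in pixels for two people.
enrolled = {"alice": [62.0, 48.0, 35.0], "bob": [70.0, 52.0, 40.0]}
print(match_face([61.0, 47.5, 34.0], enrolled))  # -> alice
print(match_face([90.0, 80.0, 70.0], enrolled))  # -> None
```

Photographing the subject from several angles, as the article describes, makes the stored measurements robust to pose, so the camera can recognise the same spacings even when the face is not square-on to the lens.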