A patent filed by Apple outlines a unique approach to facial recognition that's energy-efficient and pairs with multi-user support on devices like the iPad.
The patent outlines a very interesting way to implement a low-energy version of facial recognition alongside presence detection.
The problem with facial recognition is two-fold. First, analyzing images is computationally heavy work, which can hit a portable device's battery hard. Second, reliable recognition typically requires controlled lighting, which portable devices can't guarantee since they're used both indoors and outdoors. Apple is trying to solve both of these problems with the patent.
How do they work around these problems? Possibly by focusing on a "high information" part of the face and picking out a few distinct, important features.
In some cases, the high information portion includes eyes and a mouth. In some other cases, the high information portion further includes a tip of a nose. Processing the captured image could include detecting a face within the captured image by identifying the eyes in an upper one third of the captured image and the mouth in the lower third of the captured image.
[…] Additionally, processing the captured image could further include vertically scaling a distance between an eyes-line and the mouth of the detected face to equal a corresponding distance for the face in the reference image in order to obtain the normalized image of the detected face.
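The detection-and-normalization step quoted above can be sketched in a few lines. This is a minimal illustration of the idea, not Apple's actual implementation; the function names, coordinates, and region bounds are assumptions.

```python
# Sketch of the patent's idea: accept a candidate face only when the eyes
# sit in the upper third of the frame and the mouth in the lower third,
# then compute a vertical scale so the eyes-to-mouth distance matches the
# reference image. All names and numbers here are illustrative.

def detect_face(image_height, eye_y, mouth_y):
    """Check that the eyes fall in the top third of the image
    and the mouth in the bottom third (y grows downward)."""
    in_upper_third = eye_y < image_height / 3
    in_lower_third = mouth_y > 2 * image_height / 3
    return in_upper_third and in_lower_third

def vertical_scale_factor(eye_y, mouth_y, ref_eye_y, ref_mouth_y):
    """Scale factor that makes the detected eyes-line-to-mouth
    distance equal the corresponding distance in the reference."""
    detected = mouth_y - eye_y
    reference = ref_mouth_y - ref_eye_y
    return reference / detected

# A 300 px tall frame: eyes at y=80 (top third), mouth at y=230 (bottom third).
print(detect_face(300, 80, 230))                   # True
# Detected face spans 90 px eyes-to-mouth; reference spans 100 px.
print(vertical_scale_factor(120, 210, 100, 200))   # ~1.11
```

Restricting detection to those two regions is what keeps the search cheap: the algorithm never has to consider face candidates at arbitrary positions and scales.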
Narrowing the focus to these criteria cuts down on the computational load. Comparing just a few specific points against the reference photo (captured when setting up facial recognition) then makes the process more efficient than traditional feature-correlation matching.
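The comparison step amounts to checking a handful of normalized landmark positions against the stored reference, rather than correlating whole images. A rough sketch follows; the landmark names, threshold, and distance metric are assumptions, not taken from the patent.

```python
import math

# Sketch of point-based matching: compare a few normalized (x, y) facial
# landmarks against a stored reference instead of whole-image correlation.
# Landmark names and the acceptance threshold are illustrative assumptions.

def matches_reference(landmarks, reference, threshold=0.05):
    """Accept the face if every landmark lies within `threshold`
    (in normalized image coordinates) of its reference position."""
    for name, (x, y) in landmarks.items():
        rx, ry = reference[name]
        if math.hypot(x - rx, y - ry) > threshold:
            return False
    return True

# Reference captured at setup; probe captured at unlock time.
reference = {"left_eye": (0.35, 0.30), "right_eye": (0.65, 0.30),
             "nose_tip": (0.50, 0.55), "mouth": (0.50, 0.75)}
probe = {"left_eye": (0.36, 0.31), "right_eye": (0.64, 0.30),
         "nose_tip": (0.50, 0.56), "mouth": (0.51, 0.74)}
print(matches_reference(probe, reference))  # True
```

Checking four points is orders of magnitude cheaper than correlating every pixel, which is the efficiency the patent is after.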
Because this implementation is energy-efficient, it could be active all the time. Feasibly, you could bring your iPhone out of stand-by, point the camera at yourself, and it would unlock automatically, not unlike how it works on Android. The difference lies in the computationally inexpensive method used.
Apple also outlines how this could be used in conjunction with a previous patent they filed for presence detection. Presence detection uses various types of emitted radiation – such as sound, light, or radio waves – to detect the presence and distance of someone nearby, much like sonar or radar. This could automatically activate the facial recognition component, then unlock your device and prepare it for you.
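The sonar-style gating could look roughly like this: estimate distance from an echo's round-trip time and only wake the more power-hungry camera stage when someone is close. The wake-up threshold and the use of ultrasound specifically are assumptions for illustration.

```python
# Sketch of presence detection gating facial recognition: range a nearby
# person by echo round-trip time (sonar-style), and only wake the camera
# when they are within an assumed "in front of the device" distance.

SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in room-temperature air
WAKE_DISTANCE_M = 0.8        # assumed threshold, not from the patent

def distance_from_echo(round_trip_s):
    """Sonar ranging: the pulse travels out and back,
    so distance = (time * speed) / 2."""
    return round_trip_s * SPEED_OF_SOUND_M_S / 2

def should_wake_face_recognition(round_trip_s):
    return distance_from_echo(round_trip_s) <= WAKE_DISTANCE_M

print(should_wake_face_recognition(0.004))  # ~0.69 m away -> True
print(should_wake_face_recognition(0.010))  # ~1.72 m away -> False
```

The appeal of this staging is that the always-on part is the cheapest sensor, and each stage only powers up the next when it has something worth looking at.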
Another big feature of this patent is that facial recognition could be used to identify one of several users configured on an iOS device. It could then load that user's personal configuration.
For example, to comply with such personalized configurations, the iOS device could modify screen saver slide shows or other appliance non-security preferences.
An iPad could be configured to work with a family, and each person’s settings – such as wallpaper, application data, and so forth – could be brought up upon unlocking.
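Conceptually, the multi-user piece is a lookup from the recognized identity to that person's stored settings. A trivial sketch, with made-up profile fields and user names:

```python
# Sketch of per-user personalization on a shared device: once facial
# recognition identifies a user, load that user's stored preferences.
# Profile fields and names are invented for illustration.

PROFILES = {
    "alice": {"wallpaper": "beach.jpg", "apps": ["Mail", "Safari"]},
    "bob":   {"wallpaper": "space.jpg", "apps": ["Music", "Notes"]},
}

GUEST = {"wallpaper": "default.jpg", "apps": []}

def load_profile(recognized_user):
    """Return the recognized user's settings, or a guest default
    when the face doesn't match any configured user."""
    return PROFILES.get(recognized_user, GUEST)

print(load_profile("alice")["wallpaper"])  # beach.jpg
print(load_profile("carol")["wallpaper"])  # default.jpg (unrecognized)
```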
Lastly, the patent notes that Apple's implementation could tap into the GPU to minimize computational overhead, and that it could potentially gauge how attentive a person is to the device. This last piece would probably be used to prevent a photo from successfully unlocking the device (a trick that works on Android devices). The patent explicitly mentions that it would work with MacBook (and desktop Mac OS X) as well as iOS devices like the iPhone, iPod Touch, and iPad.