At $399, the newly launched iPhone SE is priced lower than the two-year-old iPhone XR, yet it features a faster processor. Although its camera hardware specs appear similar to those of the iPhone XR, the 2020 iPhone SE captures noticeably better images, especially portraits, and here’s its secret sauce.
Both the iPhone SE 2020 and the iPhone XR use a single 12MP rear-facing camera with an F1.8 aperture and OIS. However, images captured using the former show better colors and wider dynamic range. Moreover, portrait images shot on the new iPhone SE have a much better depth-of-field effect, and the secret is the A13 Bionic chipset and its newer ISP (image signal processor).
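To see how a depth map translates into that depth-of-field effect, here is a minimal, illustrative sketch in Python. It is not Apple’s pipeline: the function names, the box blur, and the 0.0 (near) to 1.0 (far) depth convention are all assumptions for the demo, which simply blends each pixel between the sharp and the blurred image based on its distance from the focal plane.

```python
def box_blur(img, radius):
    """Naive box blur over a 2D grayscale image (list of float rows).

    A stand-in for the lens-like blur a real portrait pipeline applies.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out


def synthetic_bokeh(img, depth, focus_depth, radius=2):
    """Blend sharp and blurred pixels by distance from the focal plane.

    `depth` holds 0.0 (near) .. 1.0 (far). Pixels far from `focus_depth`
    take more of the blurred image, mimicking portrait-mode background blur.
    """
    blurred = box_blur(img, radius)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # weight = how defocused this pixel is (0.0 = perfectly in focus)
            weight = min(1.0, abs(depth[y][x] - focus_depth) * 2.0)
            out[y][x] = (1.0 - weight) * img[y][x] + weight * blurred[y][x]
    return out
```

In a real portrait pipeline the blur kernel is shaped like a lens aperture and grows with distance from the focal plane, but the core idea is the same: per-pixel blending driven entirely by the depth map.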
The iPhone XR uses data from the 12MP camera sensor’s dual-pixel autofocus system (or what Apple calls ‘Focus Pixels’) to capture depth information, but the second-generation iPhone SE uses an entirely software-based system to generate depth data. This prompted the developers of the Halide camera app to peek behind the curtain and take a deep dive into how it actually works.
The new iPhone SE uses a technique called ‘Single Image Monocular Depth Estimation,’ which relies on the A13 Bionic’s ISP and Neural Engine. The Halide developers found that the iPhone SE can even take a picture of another picture and turn it into a portrait by estimating depth data. An example in Halide’s blog shows the iPhone SE turning a 50-year-old slide film photo into a portrait image.
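Apple has not published how its estimator works internally, but monocular depth estimation broadly means inferring distance from pictorial cues learned from training data. The toy function below illustrates just one classic cue — in typical ground-plane scenes, pixels lower in the frame tend to be closer to the camera — and is a stand-in for the Neural Engine’s learned model, not a reimplementation of it.

```python
def estimate_depth_from_height(height, width):
    """Toy monocular depth cue: assign depth purely by vertical position.

    Returns a depth map where 0.0 = nearest (bottom row) and 1.0 = farthest
    (top row). A real estimator, like the neural network running on the
    A13's Neural Engine, learns far richer cues from data: texture
    gradients, occlusion, and the familiar sizes of objects.
    """
    return [[(height - 1 - y) / (height - 1) for _ in range(width)]
            for y in range(height)]
```

This also hints at why the slide-film trick works: a purely image-content-based estimate needs no parallax between two lenses, so a flat photo of a scene can still yield a plausible depth map.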
Apple limits the stock camera app on the 2020 iPhone SE to taking portrait images of people only. That’s because the A13 Bionic chipset can create fairly accurate depth-map estimates of humans, but it is not as effective on other subjects such as pets and objects. Third-party apps can still access the depth map for non-human subjects, making it possible to turn them into portraits.
The updated version of the Halide app takes that depth map from the iPhone SE’s camera system and turns it into a portrait image, so it can apply a background blur effect to non-human subjects. Halide’s blog demonstrates this with portrait shots of a pet dog captured using the iPhone SE’s camera.
However, depth estimation starts to fail on complicated objects like flowers and trees, and in situations such as a pet lying down on a similarly colored floor or sitting directly in front of a tree. This is where Apple’s dual- and triple-camera systems on the iPhone XS, iPhone 11, and iPhone 11 Pro come out on top: they can use additional camera sensors to accurately estimate the depth of objects even in more complex scenes.
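That failure mode is easy to reproduce with any single-image cue. The toy matte below separates subject from background purely by tonal contrast (an assumption chosen for illustration; the real estimator uses far richer learned cues) and collapses as soon as the subject’s tone matches the floor — exactly the ambiguity that a second camera resolves with true parallax.

```python
def contrast_matte(img, bg_value, threshold=0.2):
    """Toy foreground matte: mark a pixel as subject (1) when its tone
    differs from the background tone by more than `threshold`, else
    background (0). Works when subject and background contrast; fails
    when they are similarly colored.
    """
    return [[1 if abs(px - bg_value) > threshold else 0 for px in row]
            for row in img]
```

With a bright dog (0.8) on a dark floor (0.2) the subject is found, but a dog whose tone nearly matches the floor (0.25 vs 0.2) vanishes from the matte entirely — no amount of thresholding recovers depth information that a single image simply does not contain.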
It is clear what powerful hardware and software can achieve together, and what Apple managed to do with the iPhone SE 2020’s single-camera system is nothing short of amazing. The phone uses a three-year-old camera sensor and gets the best out of it using the A13 Bionic chipset’s newer ISP and AI processing.
Google likewise uses the Pixel 3A’s single-camera setup to capture portrait images, though it isn’t clear whether it relies on hardware, software, or a combination of both.

[Source: Halide Blog]