In the beginning, the smartphone camera was just a supplementary feature, an afterthought of the device's evolution, never meant to match a dedicated camera. But technology advanced in leaps, and the camera became so capable that it is now one of the main selling points brands use to market their products. Today, every brand ships a default camera that meets a similar standard.
In today's showdown, we'll explain three techniques behind how a smartphone blurs everything behind your face. If you're ready, let's go.
1. Simulating human vision with a dual-lens setup
Portrait photography is really about simulating how humans perceive a 3D scene. Put simply: when we fix our gaze on something, objects close to our eyes appear distinct and sharp, while objects farther away look soft and indistinct. Compare that with a portrait from a dedicated camera, and it's clear the effect is the same: the face is sharp while the background is blurred.
No camera sensor, of any size, fully matches human vision. A smartphone's sensor in particular is very small and paired with a wide-angle lens, so the resulting photo has nearly every subject in focus: a deep depth of field, the opposite of the shallow depth of field that makes portraits stand out.
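To see why a short lens on a tiny sensor keeps almost everything sharp, here is a rough sketch using the standard hyperfocal-distance formula. The focal lengths, f-numbers, and circle-of-confusion values below are illustrative assumptions, not the specs of any particular phone or camera.

```python
def hyperfocal_mm(f_mm, N, coc_mm):
    # Hyperfocal distance: focus here and everything from half this
    # distance out to infinity is acceptably sharp.
    return f_mm ** 2 / (N * coc_mm) + f_mm

# Hypothetical smartphone main camera vs. a full-frame portrait lens.
# The circle of confusion (coc) shrinks with sensor size.
phone = hyperfocal_mm(f_mm=4.3, N=1.8, coc_mm=0.004)   # ~4.3 mm f/1.8, tiny sensor
ff = hyperfocal_mm(f_mm=50.0, N=1.8, coc_mm=0.03)      # 50 mm f/1.8, full frame

print(phone / 1000, ff / 1000)  # hyperfocal distances in metres
```

With these numbers the phone's hyperfocal distance is only a couple of metres, so almost the whole scene is in focus, while the full-frame lens can isolate a subject metres away from a sharp background.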
So many brands (Apple, Huawei, Vivo, Oppo, etc.) set out to let a smartphone shoot portraits as beautiful as a pro camera's by simulating human vision with a dual-lens design (today some phones carry as many as four lenses). The two cameras use sensors of the same size, but their lenses have different focal lengths, so they see the scene differently.
The first camera uses a wide-angle lens with a short focal length to capture the subject in sharp detail, while the second uses a telephoto lens with a longer focal length to record information about the rest of the scene: depth, shadows, the background, and so on. Once both cameras have captured their frames, software processes and merges them into a single image.
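The merge step can be sketched as a toy pipeline: turn the disparity between the two views into depth with the classic stereo relation depth = focal length × baseline / disparity, keep pixels near the subject's depth sharp, and blur the rest. This is a minimal illustration in Python/NumPy, not any vendor's actual algorithm, and every number in it is made up.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Classic stereo relation: larger disparity between the two
    # lenses' views means a nearer object.
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

def box_blur(img, k=3):
    # Crude box blur, a stand-in for the lens-like blur kernels
    # real portrait modes use.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_merge(img, depth, subject_depth, tol=0.5):
    # Keep pixels whose depth is close to the subject's; blur the rest.
    mask = np.abs(depth - subject_depth) <= tol
    return np.where(mask, img, box_blur(img))

# Toy 6x6 grayscale frame: a bright "subject" in the left half and
# one bright detail in the background.
img = np.zeros((6, 6))
img[:, :3] = 1.0
img[2, 4] = 1.0

# Hypothetical disparity map: the subject shifts 40 px between the
# two lenses, the background only 4 px.
disparity = np.tile(np.where(np.arange(6) < 3, 40.0, 4.0), (6, 1))
depth = depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.012)

result = portrait_merge(img, depth, subject_depth=depth[0, 0])
# Subject columns stay identical to the input; background columns are blurred.
```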
But there are cons. Because the blur is computed, the boundary between the subject and the background can go wrong: fine details such as stray hairs sometimes melt into the blurred backdrop. And since one of the two lenses is a telephoto, which gathers less light, the dual-lens approach weakens in dim conditions.
2. TrueDepth Technology (Infrared Sensor)
TrueDepth technology projects over 30,000 infrared dots onto our face to map its depth and proportions, scanning it in three dimensions precisely enough that the face alone can unlock the device.
But Apple has also put this technology to work for portrait shooting. It is an excellent fit for the job: the depth map identifies the shape and distance of the face, letting the software cleanly separate the subject from the background.
But there is a con to shooting portraits with TrueDepth. In a location with strong sunshine, the quality of the background separation degrades, because sunlight contains infrared light similar to the dots the front camera projects, washing them out.
3. Combining artificial intelligence with Dual Pixel Autofocus
The Google Pixel 2 does not use dual lenses. Instead it relies on Dual Pixel Autofocus (now found in most smartphones), a technology that makes focusing quick and accurate: each pixel on the sensor is split into two halves, and the slight difference between the light arriving at the two sides is used to calculate the correct focus point, and from there to estimate depth. Google combines this with artificial intelligence that distinguishes a person from the background using many cues: recognizing human skin tones, identifying the skeleton of the human body, and so on.
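The left/right half-pixel comparison can be illustrated with a minimal 1-D sketch: slide one half-image against the other and pick the shift with the smallest squared difference. Real dual-pixel disparities are sub-pixel and far noisier than this integer-shift toy, which only shows the principle.

```python
import numpy as np

def estimate_shift(left, right, max_shift=3):
    # Try each candidate integer shift and pick the one that best
    # aligns the right half-image with the left (smallest squared error).
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s)
        valid = slice(max_shift, len(left) - max_shift)  # skip wrapped edges
        err = np.sum((left[valid] - shifted[valid]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Toy 1-D scanline: the "right" view is the left view displaced by
# 2 samples, as a nearby subject would appear on a dual-pixel sensor.
left = np.array([0, 0, 1, 3, 7, 3, 1, 0, 0, 0], dtype=float)
right = np.roll(left, -2)
print(estimate_shift(left, right))  # 2 — larger shifts mean nearer objects
```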
Apple's latest smartphone, the iPhone XR, uses a technique similar to the Pixel 2's to identify people, but leans on a machine-learning model called Portrait Effects Matte (PEM). Trained on 2D color photos paired with 3D depth data, it finds the outline of the person's body in the frame at capture time, so the software knows exactly which pixels belong to the subject, and fine details (hair, glasses) are no longer blurred away.
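Once a matte like this exists, applying it is just a per-pixel blend between the sharp frame and a blurred copy. The sketch below assumes a grayscale image and a matte with values in [0, 1]; the PEM model itself is Apple's, and this shows only the final compositing step.

```python
import numpy as np

def composite_with_matte(sharp, blurred, matte):
    # Per-pixel alpha blend: matte = 1 keeps the sharp subject (hair and
    # glasses included), matte = 0 takes the blurred background, and
    # fractional values give soft edges instead of a hard cut-out.
    return matte * sharp + (1.0 - matte) * blurred

sharp = np.array([0.9, 0.9, 0.9, 0.2, 0.2])
blurred = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
matte = np.array([1.0, 1.0, 0.5, 0.0, 0.0])  # soft edge at index 2
out = composite_with_matte(sharp, blurred, matte)
```

The soft value at index 2 is what keeps wisps of hair looking natural: instead of a binary subject/background decision, the edge pixel becomes a mix of the two.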
Source: petapixel, blog.halide, AI.googleblog