With the Pixel 6 series, Google takes selfie portrait mode to the next level


One of the strengths of Google’s smartphones, ever since the first-generation Pixel, has been photographic quality. Despite hardware that is often inferior to the competition’s, the American giant has managed to extract the best from every single pixel available, drawing on its experience in computational photography, a field in which it is firmly at the forefront.

With the Pixel 6 series, which should soon arrive in Italy as well, Google has gone a step further with a new portrait mode dedicated to selfies that can recognize even individual strands of hair, delivering results that are decidedly better than before.

A new series of models

Emulating a large lens and sensor isn’t easy, and a high-quality software model is needed to bring appreciable improvements. That’s why Google went back to work to create a new set of models capable of improving the recognition of the finest details, with the help of Tensor’s performance.

To train a mathematical model properly, you need a dataset that is up to the task, with shots from every angle and under different light sources, so as to generate a more accurate mask than in the past. Google therefore dusted off the spherical capture rig already used for the Pixel 5, made up of hundreds of LEDs, depth sensors and cameras, to capture a large number of samples with a near-perfect mask that precisely separates the subject from the background.
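
Just to make the idea of such a mask concrete, the toy sketch below thresholds a depth map to separate a nearby subject from a distant background. Everything here (the function name, the distance cutoff, the synthetic depth values) is invented for illustration and is not a description of Google's capture rig.

```python
import numpy as np

def subject_mask_from_depth(depth_m: np.ndarray, max_subject_distance: float = 1.5) -> np.ndarray:
    """Toy example: mark as 'subject' every pixel closer than a chosen distance.

    depth_m              -- per-pixel depth in meters (H x W float array)
    max_subject_distance -- invented cutoff; a real rig fuses many more cues
    """
    return (depth_m < max_subject_distance).astype(np.float32)

# Synthetic depth map: a "person" at about 1 m in the center, background at 3 m.
depth = np.full((480, 640), 3.0, dtype=np.float32)
depth[100:400, 220:420] = 1.0
mask = subject_mask_from_depth(depth)
print(int(mask.sum()), "pixels marked as subject")
```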

Put like that it might sound (almost) simple, but in reality several more steps are needed before getting to the magic of computational photography. From the captured shots, several sets of photographs were generated, changing the lighting to match real scenes using the depth data, ray tracing and a simulation of optical distortion, so as to obtain realistic results.
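
A heavily simplified version of this kind of synthetic data generation is alpha compositing: paste the masked subject onto a new background and tweak the lighting. The NumPy sketch below is only a conceptual illustration under that assumption; it does not reproduce the ray-traced relighting or the optical simulation described above.

```python
import numpy as np

def composite(subject: np.ndarray, alpha: np.ndarray, background: np.ndarray,
              gain: float = 1.0) -> np.ndarray:
    """Minimal sketch of building one synthetic training sample.

    subject, background -- H x W x 3 float images in [0, 1]
    alpha               -- H x W matte from the capture rig (1.0 = subject)
    gain                -- crude global lighting tweak standing in for the far
                           more sophisticated relighting used in practice
    """
    relit = np.clip(subject * gain, 0.0, 1.0)
    a = alpha[..., None]                      # broadcast the matte over RGB
    return a * relit + (1.0 - a) * background

# Example with random data standing in for real captures.
rng = np.random.default_rng(0)
subj = rng.random((480, 640, 3), dtype=np.float32)
bg = rng.random((480, 640, 3), dtype=np.float32)
alpha = np.zeros((480, 640), dtype=np.float32)
alpha[100:400, 220:420] = 1.0
sample = composite(subj, alpha, bg, gain=0.8)
```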

Thousands of photos were then taken in “real” settings, with an accurate model extracting the corresponding masks and a visual inspection ensuring that only the highest-quality samples were used. The two datasets were then fed to the machine learning system to properly train the model and make it capable of recognizing a wide range of scenes, poses and people.
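
As an idea of what feeding the two sets to a machine learning system can look like in practice, here is a minimal, hypothetical training step in Python with PyTorch. The tiny network, the loss and the random tensors are placeholders; the actual architecture and datasets used by Google are not reproduced here.

```python
import torch
import torch.nn as nn

# Tiny stand-in for the real segmentation network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),               # one mask logit per pixel
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()       # per-pixel subject/background classification

def training_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: N x 3 x H x W, masks: N x 1 x H x W with values in {0, 1}.
    Batches would mix the studio-rig set and the 'real world' set."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for both datasets.
print(training_step(torch.rand(2, 3, 64, 64),
                    torch.randint(0, 2, (2, 1, 64, 64)).float()))
```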

Low and high resolution masks

At this point you might think the job is done, but there are further steps before you get a top-quality selfie. While most smartphones simply capture the image and apply a mask to the background in order to blur it, the Google Pixel 6 and Google Pixel 6 Pro do quite a bit more.
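
That “standard” approach essentially amounts to blurring the whole frame and keeping the sharp pixels only where the mask marks the subject. A minimal NumPy/OpenCV sketch of that baseline, not tied to any particular phone’s implementation:

```python
import cv2
import numpy as np

def naive_portrait(image: np.ndarray, mask: np.ndarray, ksize: int = 31) -> np.ndarray:
    """Blur everything, then keep the original pixels where the mask says 'subject'.

    image -- H x W x 3 uint8 photo
    mask  -- H x W float in [0, 1], 1.0 where the subject is
    """
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    a = mask.astype(np.float32)[..., None]
    out = a * image.astype(np.float32) + (1.0 - a) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```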

Both the photo and the initial, decidedly coarse mask are passed to the previously trained model, which generates a better-defined mask, albeit at low resolution. The model then performs an upsampling step to raise the resolution, guided by the original photo and the first mask. The end result is a high-resolution mask of much higher quality, which is applied to the image to keep the subject sharp while blurring the background.
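
Putting the pieces together, the flow described above is: coarse mask in, refined low-resolution mask out, then an upsampling step tied to the full-resolution photo, and finally the blur. In the hypothetical Python sketch below the refinement network is replaced by a simple smoothing stand-in and the guided upsampling by a plain resize; it illustrates the structure of the pipeline, not Google’s actual implementation.

```python
import cv2
import numpy as np

def refine_mask(photo_lowres: np.ndarray, coarse_mask: np.ndarray) -> np.ndarray:
    """Stand-in for the trained refinement network: here just a smoothing pass
    that takes the same inputs (low-res photo + coarse mask)."""
    return cv2.GaussianBlur(coarse_mask, (9, 9), 0)

def portrait_pipeline(photo: np.ndarray, coarse_mask_lowres: np.ndarray) -> np.ndarray:
    """photo: full-res H x W x 3 uint8; coarse_mask_lowres: h x w float32 in [0, 1]."""
    h, w = coarse_mask_lowres.shape
    photo_lowres = cv2.resize(photo, (w, h))

    # 1) Low-resolution but better-defined mask from the (stand-in) model.
    refined_lowres = refine_mask(photo_lowres, coarse_mask_lowres)

    # 2) Upsample to full resolution. A real implementation would use the
    #    original photo to guide this step (edge-aware upsampling) so that fine
    #    details such as hair strands stay crisp; plain resize is a simplification.
    high_res_mask = cv2.resize(refined_lowres, (photo.shape[1], photo.shape[0]),
                               interpolation=cv2.INTER_LINEAR)

    # 3) Keep the subject sharp, blur the background.
    blurred = cv2.GaussianBlur(photo, (41, 41), 0)
    a = np.clip(high_res_mask, 0.0, 1.0)[..., None]
    return (a * photo + (1.0 - a) * blurred).astype(np.uint8)
```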

Amazing results

This is how more accurate selfies are born, with a convincing bokeh effect that is not yet quite on par with a traditional camera, but with greater attention to even the smallest details. The extra precision of the mask makes it possible to handle correctly even fine details such as the curls of hair in the sample below, which are certainly not easy to manage.

The new model developed by Google also handles different skin types and hairstyles better, ensuring more accurate and realistic results for everyone, regardless of skin color or hair. There is still room for improvement, but an important step has certainly been taken in the right direction, making the smartphone camera ever more useful.

After all, it is an object we almost always have in our pockets, and we expect it to always be ready to capture the essence of what we see.

You might be interested in: Google Pixel 6 Pro review
