My name is Timothy Knight. I'm an engineering director at Google for the Android camera team. OK, so first, on the fundamentals: image quality, video quality, speed, shutter latency, time to open the camera. We doubled down on all of those, so everywhere it's better than last year, you know, better photos, better videos. On top of that we added some new features: OIS for even crisper photos and more stable videos, portrait mode, motion photos, face retouching. I think the experience has really evolved. So the technique we use is to capture a burst of photos and then combine them together in software, which makes a really high-dynamic-range, high-quality final photograph. With OIS, now every single frame in that burst is sharper and cleaner.
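The burst-merge idea described here can be sketched very minimally: averaging N noisy captures of the same scene reduces random per-pixel noise by roughly sqrt(N). This is a toy illustration, not the actual pipeline, which also aligns frames and handles motion between them; all names and numbers below are illustrative assumptions.

```python
import numpy as np

def merge_burst(frames):
    """Naive burst merge: average the frames to cut noise.

    Averaging N frames reduces random per-pixel noise by roughly
    sqrt(N). A real pipeline would first align frames and reject
    pixels affected by motion; this sketch skips both.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: the same scene captured 8 times with
# independent sensor noise on each frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(4, 4))
burst = [scene + rng.normal(0.0, 10.0, size=scene.shape) for _ in range(8)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
# The merged frame sits closer to the true scene than any one capture.
```

With 8 frames the expected noise drops by about a factor of sqrt(8) ≈ 2.8, which is why every frame being sharper (thanks to OIS) compounds into a cleaner final merge.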
So the final result is even sharper and cleaner than before. And in video mode, a problem if you don't have OIS is that if there's motion blur within a frame, you get a bit of a wobbly, jiggly look to the video. But by running optical stabilization during video recording, it's even smoother; we get rid of that motion shake, and it now has a more stable feel. We're capturing much darker versions of the scene, where the highlights are not blown out and the sky is still blue, and then we do some very sophisticated noise reduction by combining frames together and apply tone mapping to get to the final rendition. So you're able to preserve the highlights and the blue skies, and also in the dark areas.
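The expose-dark-then-tone-map step can be illustrated with a toy gamma curve: underexposing keeps highlights below clipping, and a gamma below 1 then lifts shadows far more than highlights. The gamma value of 0.5 here is an arbitrary assumption, not the actual tone curve used.

```python
import numpy as np

def tone_map(underexposed, gamma=0.5):
    """Simple gamma tone map: lift shadows, keep highlights intact.

    Capturing darker keeps bright areas (sky, highlights) below the
    sensor's clipping point; applying gamma < 1 afterwards brightens
    dark regions much more than bright ones, recovering shadow detail.
    """
    x = np.clip(np.asarray(underexposed, dtype=np.float64), 0.0, 1.0)
    return x ** gamma

# A deep shadow pixel and a bright-sky pixel from an underexposed frame.
out = tone_map(np.array([0.04, 0.81]))
# 0.04 -> 0.2 (a 5x boost for the shadow) while 0.81 -> 0.9
# (only ~11% gain), so the blue sky is preserved un-clipped.
```

The combination, merge frames first to suppress noise, then tone map, matters: boosting shadows also boosts noise, so the noise reduction has to come before the lift.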
We see detail in the shadows. So for portrait mode there are actually two techniques. The first technique is machine learning: by training a model on a million images, like a lot of images, we're able to understand foreground-background segmentation and, on both the front and rear cameras, do a really nice background blur. Additionally, for the rear camera we have a special sensor technology referred to as dual pixel, where every single pixel has both a left and a right half. Conceptually, that means you have two slightly different viewpoints of the scene, as if you moved your head a little left and right, and that is enough to give you a depth map of the scene. Combining that with the machine learning model, we can get an even more accurate portrait photo, as well as portrait photos of things other than people.
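A rough sketch of how the two signals could be combined: a segmentation mask (from the ML model) keeps the subject sharp, while a depth map (from dual-pixel disparity) scales how strongly the rest of the frame is blurred. Everything here, the box blur, the weighting, the parameter names, is a hypothetical stand-in for the real rendering.

```python
import numpy as np

def portrait_blur(image, person_mask, depth, focus_depth, blur_strength=3.0):
    """Blend in a blurred copy of the image, keeping the subject sharp.

    person_mask: 1.0 where the segmentation model says "person", else 0.0.
    depth:       per-pixel depth estimate (e.g. from dual-pixel disparity).
    Pixels outside the mask get blur proportional to their distance
    from the focus plane; pixels on the subject stay untouched.
    """
    # Crude separable 3-tap box blur (wraps at edges) as a stand-in
    # for a real lens-blur kernel.
    blurred = np.asarray(image, dtype=np.float64)
    for axis in (0, 1):
        blurred = (np.roll(blurred, 1, axis) + blurred
                   + np.roll(blurred, -1, axis)) / 3.0
    # Blur weight: 0 on the subject, grows with distance from focus plane.
    weight = np.clip(np.abs(depth - focus_depth) * blur_strength, 0.0, 1.0)
    weight = weight * (1.0 - person_mask)
    return weight * blurred + (1.0 - weight) * image

# Example: a small random "photo" with a person region in the center.
rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(6, 6))
mask = np.zeros((6, 6)); mask[2:4, 2:4] = 1.0   # ML segmentation result
depth = np.ones((6, 6)); depth[2:4, 2:4] = 0.0  # subject at focus plane
out = portrait_blur(img, mask, depth, focus_depth=0.0)
# Subject pixels are unchanged; background pixels are blurred.
```

Using depth rather than the mask alone is what lets the same machinery blur backgrounds behind objects that aren't people, where a person-trained segmentation model has nothing to say.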
There are a lot of dual cameras on the market, and they may not all be that great. A dual camera, you know, brings a lot of trade-offs with it. For example, it takes more space, and maybe the battery is smaller. And often the second camera is really not very good in low light, and for us low light is super important, because it has smaller pixels and a shallower, sorry, a narrower aperture. But in the end, I think the image quality and the capabilities we wanted to bring to the table were really, like I said, a single-camera experience, and I believe that we met that goal.
You know: best photos in the world, best videos in the world, the fastest capture, fast open time. It's hard to complain about that.
Source: CNET