All Pixel 4 camera features - as explained by Google [4K]

By AndroGuider
Aug 14, 2021

For the past three years, Pixel has set the standard for smartphone cameras with incredible capabilities like HDR+, Super Res Zoom, Top Shot, and of course Night Sight. With Pixel 4, we're raising that bar yet again, and it all starts with this little square: basically a miniaturized camera rig right on the back of your phone. You can see the rear wide and telephoto cameras, a hyperspectral sensor, a mic for your videos and Instagram stories, and a flash that we hope you'll use mostly as a flashlight, but it's there just in case. But the hardware isn't what makes our camera so much better. The special sauce that makes our Pixel camera unique is our computational photography, and who better to talk about it than professor Marc Levoy from Google Research.

Thanks, Sabrina, it's great to be here. There's a saying among photographers that what's important to taking a great picture is, in order: subject, lighting, lens, and camera body. It's their way of saying that it doesn't matter which SLR body you use unless you get the first three elements right. Well, here's a slightly different take on this list.

Subject, lighting, lens, software. By software I mean computational photography. So what does that mean? It means doing less with hard-wired circuitry and more with code; I like to call it a software-defined camera. It typically means capturing and combining multiple pictures to make a single, better picture. One version of this is HDR+, the technology we've used for taking photos on every Pixel phone. When you tap the shutter button, we capture a burst of up to nine pictures.

These pictures are deliberately underexposed to avoid blowing out highlights. We align them using software and average them, which reduces noise in the shadows. This lets us brighten the shadows, giving you detail in both the highlights and the shadows. In fact, there's a simple formula: noise goes down as the square root of the number of images you average together. So if you average nine images, you get one third as much noise.
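To make that square-root law concrete, here's a toy simulation in Python (my own sketch, not Google's pipeline), in which averaging a burst of nine identically exposed noisy frames cuts the noise to roughly a third:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((256, 256), 0.25)   # a dim, deliberately underexposed scene
noise_sigma = 0.05                  # per-frame sensor noise

def capture_burst(n_frames):
    """Simulate a burst of identically exposed, noisy frames."""
    return scene + rng.normal(0.0, noise_sigma, size=(n_frames, *scene.shape))

single = capture_burst(1).mean(axis=0)
burst9 = capture_burst(9).mean(axis=0)          # align-and-average merge

print(f"noise, 1 frame:  {single.std():.4f}")   # ~0.050
print(f"noise, 9 frames: {burst9.std():.4f}")   # ~0.017, about one third

# Brightening the averaged result lifts the shadows without
# amplifying as much noise as brightening a single frame would.
brightened = np.clip(burst9 * 3.0, 0.0, 1.0)
```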

This isn't mad science, it's just simple physics. By the way, on the left is our raw output, if you enable that in the app. There's something else about this list: it says the lens is important. Without quibbling about the order on the list, some subjects are farther away than you'd like, so it does help telephoto shots to have a telephoto lens. Pixel 4 has a roughly 2x telephoto lens, plus our Super Res Zoom technology, in other words a hybrid of optical and digital zoom, which we use on both the main and telephoto lenses, so you get sharp imagery throughout the zoom range. Here's an example.

You probably think this is a 1x photo. It's not; it's a zoom, taken from way back here. By the way, Super Res Zoom is real multi-frame super-resolution, meaning that pinch-zooming before you take the shot gives you a sharper photo than cropping afterwards. So don't crop like this; compose the shot you want by pinch-zooming. Also, by the way, most popular SLR lenses do magnify scenes, not shrink them. So while wide-angle can be fun, we think telephoto is more important.
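The claim that pre-capture zoom beats post-capture cropping follows from how multi-frame super-resolution works. Here's a toy one-dimensional illustration (my own sketch, not Super Res Zoom itself): several low-resolution captures with different sub-pixel offsets can be merged onto a finer grid, recovering detail that upscaling a single cropped frame cannot.

```python
import numpy as np

fine = np.sin(np.linspace(0, 20 * np.pi, 800))   # "true" high-res scene detail
factor = 4                                        # low-res sampling factor

# Four low-res captures, each offset by one fine-grid sample; on a
# real phone, natural hand shake supplies these sub-pixel offsets.
frames = [fine[k::factor] for k in range(factor)]

# Shift-and-add merge: interleave the samples back onto the fine grid.
merged = np.empty_like(fine)
for k, frame in enumerate(frames):
    merged[k::factor] = frame

# Single-frame alternative: crop one capture and upsample it afterwards.
upsampled = np.repeat(frames[0], factor)

print("multi-frame merge error:  ", np.abs(merged - fine).mean())     # ~0
print("crop-and-upsample error:  ", np.abs(upsampled - fine).mean())  # much larger
```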

So what new computational photography features are we launching with Pixel 4? Four of them. First, Live HDR+. Everyone here is familiar with HDR+'s signature look and its ability to capture extreme brights and darks in a way that looks crisp and natural. But even phones with good HDR solutions can't compute them in real time, so the viewfinder often looks different from the final image. In this example, the window is blown out in the viewfinder, which might tempt you into fiddling with the exposure. This year we're using machine learning to approximate HDR+ in the viewfinder, so you get our signature look while you compose your shot. We call this feature Live HDR+. The industry's most successful HDR solution is now real-time and WYSIWYG: what you see is what you get. Now, if we have an intrinsically HDR camera, we should have HDR controls for it.

So Pixel 4 has dual exposure controls. Here's an example: this is a nice HDR+ shot, but maybe you would like to try it as a silhouette. So you tap on the screen and lower the brightness slider a bit; that mainly changes the capture exposure. Then you lower the shadows slider a lot; that mainly changes the tone mapping. And voilà, you get a different artistic vision. Try doing that with any other cell phone. So: separate sliders for brightness and shadows, while you compose your shot. It's a different way of thinking about controlling exposure in a camera.
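To illustrate the distinction between the two sliders, here's a minimal sketch under my own assumptions (Google's actual capture and tone-mapping pipeline isn't public): the brightness slider behaves like a linear gain on the scene-referred capture, while the shadows slider bends only the tone curve.

```python
import numpy as np

def render(linear, brightness=1.0, shadows=1.0):
    """linear: scene-referred image values in [0, 1]."""
    # Brightness slider: acts like capture exposure (a linear gain).
    exposed = np.clip(linear * brightness, 0.0, 1.0)
    # Shadows slider: a simple gamma-style tone curve; values below 1
    # crush dark tones toward black (the silhouette look), values
    # above 1 lift them.
    return exposed ** (1.0 / shadows)

ramp = np.linspace(0.0, 1.0, 5)                    # gray ramp as a stand-in scene
print(render(ramp))                                # default rendering
print(render(ramp, brightness=0.6, shadows=0.4))   # darker capture, crushed shadows
```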

Second, white balancing. White balancing in photography is a hard problem; mathematicians call it an ill-posed problem. Is this snow blue, the way this SLR originally captured it, or is it white snow illuminated by a blue sky? We know that snow is white; with enough training, so can the camera. We've been using learning-based white balancing in Night Sight since Pixel 3. In Pixel 4, we're using it in all photo modes, so you get truer colors, especially in tricky lighting. Here's a tough case: an ice cave. It's blue light, but not a blue person. And here's what it looks like with Pixel 4's white balancing.
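For contrast, here's the classical "gray-world" heuristic that learning-based white balancing improves on (this sketch is mine, not Google's model). Gray-world assumes the scene averages out to neutral gray, which is exactly the assumption that fails in an all-blue ice cave; a trained model predicts the correction from data instead.

```python
import numpy as np

def gray_world(image):
    """image: float RGB array, shape (H, W, 3), values in [0, 1]."""
    per_channel = image.reshape(-1, 3).mean(axis=0)
    gains = per_channel.mean() / per_channel   # push each channel toward gray
    return np.clip(image * gains, 0.0, 1.0)

# A bluish-lit patch: gray-world removes the cast, but it would also
# wrongly "correct" a scene that is legitimately blue.
bluish = np.ones((4, 4, 3)) * np.array([0.3, 0.4, 0.7])
print(gray_world(bluish)[0, 0])   # roughly neutral gray
```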

Third, we've continued to improve portrait mode with our dual-pixel, or split-pixel, technology. We've always been good at portraits and at macro shots. This year we're computing depth using machine learning from both dual pixels and dual cameras, which gives us accurate depth farther from the camera. This extends portrait mode to large objects and stand-further-back portraits. We also have a luscious new SLR-like bokeh; that's the shape of the blur. Look at the lights on either side of her head. We're doing better on hair and dog fur, which are hard, and of course we still do great selfie portraits.
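Here's a minimal sketch of why that bokeh looks SLR-like, under my own assumptions about the rendering (it uses SciPy; the real pipeline is far more sophisticated): blurring the background with a hard-edged disc kernel, rather than a Gaussian, is what turns out-of-focus lights into crisp circles.

```python
import numpy as np
from scipy.ndimage import convolve

def disc_kernel(radius):
    """A normalized, hard-edged disc: the shape of an SLR aperture."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def fake_bokeh(image, depth, focus_depth, radius=7):
    """image: (H, W) luminance; depth: (H, W), larger = farther away."""
    blurred = convolve(image, disc_kernel(radius), mode="reflect")
    background = depth > focus_depth        # crude depth threshold
    return np.where(background, blurred, image)
```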

Fourth and last, we've continued to improve Night Sight in many ways and extended it to a use case that has always been sort of a holy grail for me: astrophotography. You could have taken this dusk shot using Pixel 3 last year. Using Pixel 4, you can take this nighttime picture from the same viewpoint. In the year since we launched it, Night Sight has been called everything from fake to sorcery. Well, it's neither. Think back to the mathematics that I explained at the beginning.

Astrophotography is about taking longer exposures and more of them: up to 16 seconds per exposure, times 15 exposures. That's four minutes, but it's a single shutter press, and it's fully automatic. By the way, you can't do this with a single long exposure; in four minutes the stars do move and trees wave in the wind, so you need robust alignment and merging of multiple pictures. And for a four-minute exposure we do recommend a tripod, or you can prop your phone on a rock.
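Here's a toy version of that robust alignment and merging (my own sketch; Night Sight's real merge also has to handle rotation, parallax, and the moving stars themselves): estimate each frame's global translation against a reference by brute-force search, undo it, then average.

```python
import numpy as np

def align_and_merge(frames, max_shift=4):
    """frames: list of 2-D arrays, the burst of long exposures."""
    ref = frames[0]
    aligned = [ref]
    for frame in frames[1:]:
        # Brute-force search for the integer shift that best matches
        # the reference (a stand-in for real image alignment).
        best = min(
            ((dy, dx)
             for dy in range(-max_shift, max_shift + 1)
             for dx in range(-max_shift, max_shift + 1)),
            key=lambda s: np.abs(np.roll(frame, s, axis=(0, 1)) - ref).sum(),
        )
        aligned.append(np.roll(frame, best, axis=(0, 1)))
    # Averaging 15 aligned frames cuts noise by about sqrt(15).
    return np.mean(aligned, axis=0)
```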

Is there machine learning? Yes, we use it for white balancing, as I mentioned. We also use semantic segmentation in all our photo modes, and have for years: to brighten faces in HDR+, a feature we call synthetic fill flash; to separate foregrounds from backgrounds in portrait shots; and to darken and denoise skies in Night Sight. Is there computational photography? There's lots of that too. Digital sensors are prone to hot pixels that are stuck at red, green, or blue. The longer the exposure, the more hot pixels, and our exposures are pretty long, so we need some clever algorithms to remove those hot pixels. By the way, that's our astrophotography field-testing team, and yes, they sat still for a long time for this shot.
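One classic way to remove hot pixels, sketched here under my own assumptions (Google hasn't published its exact algorithm; this uses SciPy): a pixel that differs wildly from the median of its neighborhood is probably a stuck sensor element rather than scene detail, so it gets replaced by that median.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(channel, threshold=0.2):
    """channel: 2-D float array for one color plane, values in [0, 1]."""
    local_median = median_filter(channel, size=3)
    # Flag pixels far from their local median, then swap them out.
    hot = np.abs(channel - local_median) > threshold
    return np.where(hot, local_median, channel)
```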

So where does this game stop? What can't we capture using Pixel 4? Well, we can capture the moon, which, by the way, required some fiddling with those dual exposure controls I told you about. And we can capture a moonlit landscape. This is not daytime; it's the middle of the night, and the landscape is illuminated only by the moon. See the stars? But what we can't do, including on Pixel 4 today, is capture both at once in the same picture. The problem here is that the moon is blown out and the Marin Headlands at the bottom are just a silhouette.

The dynamic range, the difference in brightness between a full moon and a moonlit landscape, is 19 stops. That's 19 doublings, or 2^19 ≈ 524,000: about half a million times brighter, way beyond the range of any consumer camera, even an SLR. So is this scene forever impossible with a cell phone? Remember what I said at the beginning about a software-defined camera: Pixel is committed to making its cameras better with software updates, so stay tuned on this one. To sum up, four new computational photography features: Live HDR+ with dual exposure controls, learning-based white balancing, wider-range portrait mode with an SLR-like bokeh, and Night Sight with astrophotography.

Oh, and remember, you can use Night Sight for many things besides stars. Many things! So go out there and be creative with Pixel 4.


Source: AndroGuider
