Your Google Pixel 2 camera just got even better – here's why

Google has switched on the Visual Core chipset in its Pixel 2 handsets, and it makes your social photos look oh so much better

Ever noticed your Android phone takes worse photos in social apps like WhatsApp than in the camera app? It’s because those apps don’t use the same shooting modes and post-processing as the dedicated camera app.

And no matter how good a camera’s hardware is, software and image processing are at the heart of every great phone camera.

The Pixel 2 and Pixel 2 XL have two of the best phone cameras in the world. However, until 8 February 2018, the software that caused many to label the Pixel 2 XL camera the best around was not used in third-party apps such as Instagram.

Google’s Visual Core is behind the change. It’s a custom chipset that enables advanced HDR+ and RAISR processing outside of the Google Camera app. Snapchat, WhatsApp, Instagram and Facebook are the first apps to make use of Visual Core, which until now has sat dormant in the phones, waiting to be used. Other apps only need to support the relevant camera APIs to get on board.

What is Visual Core?

Speed and efficiency are the main benefits of the Visual Core chipset, which uses eight cores to execute up to three trillion operations per second. A custom design lets it optimise photos more quickly and efficiently than the phone’s CPU, or even the image signal processor that handles photo processing in the Pixel 2’s dedicated camera app.

This is all the more important with third-party apps, as they need the photo immediately, whereas camera apps tend to continue processing photos in the background as you take more shots.

“[Visual Core] gives us the ability to run five times faster than anything else in existence, while consuming about 1/10th of the energy of the battery. We can put it under the hood,” Visual Core engineering manager Ofer Shacham told WIRED.com.

To understand why you should care, we need to look a little more into what HDR+ does.

HDR+

Your average camera shoots a single image when the shutter button is pressed. If shooting in Auto mode, the camera will choose the exposure settings that best suit the scene. Any highlights that are beyond the capabilities of the sensor’s native dynamic range will become flat blocks of white, clipped out of existence. Unlike shadow detail, that image information is gone. You can’t dig it out in Photoshop.
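To see why that asymmetry matters, here is a tiny numpy sketch – nothing specific to the Pixel, just the arithmetic of clipping – showing why a blown highlight can’t be rescued while a dark shadow can:

```python
import numpy as np

# Toy illustration: once a sensor value exceeds its saturation limit it is
# clipped, and no amount of post-processing can bring the detail back.
scene = np.array([0.02, 0.4, 0.9, 1.6, 2.5])   # "true" scene brightness
sensor_max = 1.0                                # the sensor's saturation point

captured = np.clip(scene, 0.0, sensor_max)      # highlights flatten to 1.0
recovered = captured * 0.6                      # pulling exposure down later...
print(recovered)    # ...the two clipped values stay identical: detail is gone

# Shadows behave differently: dark values are still distinct (just noisy),
# so brightening them in software can reveal detail that was always there.
shadows = np.array([0.010, 0.012, 0.015])
print(shadows * 40)                             # distinct values survive the boost
```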

This is why HDR+ uses a burst of up to 10 underexposed shots each time a photo is taken. “We take them all and chop them into little bits, and line them on top of one another, and average the image together,” Pixel Camera project manager Isaac Reynolds told Wired. The result is much better shadow detail, fewer (if any) clipped highlights, less noise and better colour.

The traditional approach to HDR is to use three shots at different exposure settings, and then merge them together. HDR+ is a rethinking of how to approach dynamic range optimisation in phones. It has been around since 2014.
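To make the idea concrete, here is a heavily simplified numpy sketch of that align-and-average step. Google’s real pipeline aligns and merges small tiles robustly; the known shifts, the plain mean and the function name below are stand-ins for illustration only.

```python
import numpy as np

def naive_hdr_plus_merge(frames, shifts):
    """Crude sketch of the burst idea: align each underexposed frame to the
    first one, then average them.  Real HDR+ aligns small tiles and merges
    them robustly; np.roll with known shifts stands in for that step here."""
    reference = frames[0]
    aligned = [reference]
    for frame, (dy, dx) in zip(frames[1:], shifts):
        aligned.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)   # averaging suppresses random noise

# Simulate a burst of 10 noisy, deliberately underexposed captures of one scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.2, size=(64, 64))
burst = [scene + rng.normal(0, 0.02, scene.shape) for _ in range(10)]
merged = naive_hdr_plus_merge(burst, shifts=[(0, 0)] * 9)

print("single-frame noise:", np.std(burst[0] - scene))
print("merged noise:      ", np.std(merged - scene))   # roughly sqrt(10) lower
```

The averaged frame can then have its shadows brightened with far less noise than any single shot would allow, which is where the extra dynamic range comes from.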

In its early days, on the Nexus 5 and Nexus 6, HDR+ could look unrealistic in daylight, but today it makes iPhone X and Galaxy S8 owners jealous.

At times the Pixel 2 can get eerily close to the dynamic range of a compact system camera or DSLR with a sensor roughly 10 times the size. And now your Instagram posts get the benefit.

RAISR

RAISR, which stands for Rapid and Accurate Image Super-Resolution, is another Visual Core feat. Like HDR+ it is designed to solve a problem with mobile phone cameras. This time, digital zoom.

When you zoom into a scene, standard procedure is to blow up the image to the same size as a 1x zoom photo. An upscaler algorithm both “smudges” and sharpens the image to make up for the missing pixels, because when you zoom, the camera only uses a fraction of the sensor’s information.
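A rough numpy sketch of what plain digital zoom amounts to – crop, blow up, sharpen – makes the problem clear. This is a generic illustration, not the Pixel’s actual upscaler.

```python
import numpy as np

def digital_zoom_2x(image):
    """Plain digital zoom, sketched with numpy: crop the central quarter of
    the frame, blow it back up to full size (nearest-neighbour here), then
    sharpen to disguise the softness.  No new detail is created; the upscaler
    only interpolates between the pixels it already has."""
    h, w = image.shape
    crop = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]   # central 1/4 of pixels
    upscaled = crop.repeat(2, axis=0).repeat(2, axis=1)    # back to full size
    # Simple unsharp mask: boost the difference from a local average.
    blurred = (upscaled
               + np.roll(upscaled, 1, axis=0) + np.roll(upscaled, -1, axis=0)
               + np.roll(upscaled, 1, axis=1) + np.roll(upscaled, -1, axis=1)) / 5
    return np.clip(upscaled + 0.5 * (upscaled - blurred), 0.0, 1.0)

zoomed = digital_zoom_2x(np.random.default_rng(1).uniform(size=(128, 128)))
print(zoomed.shape)   # same output size as a 1x shot, built from 1/4 of the data
```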

The iPhone X uses a more traditional approach, with a genuine 2x secondary camera on the back. Google’s RAISR instead uses software techniques that might be compared to those of Prisma, an image-editing app that blew up in 2016. Prisma uses machine learning to make your photos look like works of art.

As Google detailed in a 2016 Research blog post, RAISR recognises patterns at pixel level and uses a huge database of filters to fill in detail the camera sensor lacks the resolution to resolve on its own. In an exercise of pure computational photography, the Pixel 2 re-paints a more detailed version of the image using machine learning. Google’s own demos show it works extremely well on predictable elements such as hairs and wrinkles.
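Here is a toy sketch of that filter-lookup idea in numpy. The function name, the hand-made filter bank and the gradient-angle hashing are illustrative stand-ins; real RAISR trains thousands of filters offline on pairs of low- and high-resolution images, and hashes patches on gradient angle, strength and coherence.

```python
import numpy as np

def raisr_style_upscale(low_res, filter_bank, patch=5):
    """Toy sketch of the RAISR idea: upscale cheaply first, then, for every
    pixel, pick a small filter from a lookup table keyed on the local gradient
    direction, and use it to re-estimate that pixel.  The real system learns
    its filters offline; 'filter_bank' here is just a stand-in."""
    up = low_res.repeat(2, axis=0).repeat(2, axis=1)        # cheap 2x upscale
    gy, gx = np.gradient(up)
    angle_bin = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * len(filter_bank)
    angle_bin = np.clip(angle_bin.astype(int), 0, len(filter_bank) - 1)

    pad = patch // 2
    padded = np.pad(up, pad, mode="edge")
    out = np.empty_like(up)
    for y in range(up.shape[0]):
        for x in range(up.shape[1]):
            kernel = filter_bank[angle_bin[y, x]]           # filter chosen by local structure
            window = padded[y:y + patch, x:x + patch]
            out[y, x] = np.sum(window * kernel)
    return out

# Hypothetical bank of 8 orientation-dependent filters (hand-made here);
# in RAISR proper these are trained, not designed by hand.
bank = [np.eye(5) / 5 if i % 2 else np.full((5, 5), 1 / 25) for i in range(8)]
result = raisr_style_upscale(np.random.default_rng(2).uniform(size=(32, 32)), bank)
print(result.shape)   # (64, 64)
```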

RAISR does not work quite as well with more complicated fine details like a tight-knit pattern of leaves on far-off trees. But it is, nevertheless, impressive. Visual Core lets a Pixel 2 apply this processing to WhatsApp images almost instantly.

Computational Photography: AI and NPU

This is not the first time we’ve seen dedicated silicon used for a phone’s camera in this way, though. Apple’s iPhone X uses a dedicated neural engine to speed up the facial recognition behind Face ID, performing “600 billion operations per second”, compared with Visual Core’s three trillion.

The Huawei Mate 10 and Mate 10 Pro also have a comparable co-processor, which Huawei calls a Neural Processing Unit (NPU). Huawei markets this under what is in danger of becoming the most overused term of the moment: “AI”. The NPU monitors the camera sensor’s visual feed in real time to apply scene modes in the camera app. This is nothing new, of course; Sony’s Xperia phones have done it for a long time. The NPU can simply perform the task faster. It also attempts a limited version of Google’s RAISR, recognising and sharpening text when zoom is used.

Where is this all going?

As with pressure-sensitive phone screens, Huawei got the tech out of the door early, if not necessarily in the best shape. But will this trend largely disappear like pressure-sensitive displays, or is it the beginning of something even more important?

Google engineering manager Ofer Shacham hinted there are further plans for Visual Core, if only in the vaguest of terms. Its ability to perform visual analysis at ultra-high speeds points in one obvious direction: AR.

The day after Google announced its imminent “switch on” of Visual Core, the Google Research blog posted an article about the instant motion tracking behind its Motion Stills AR feature. This lets you place 3D objects in the camera view, and they stay anchored in the environment as you move the camera.

Was the blog’s timing pure coincidence? Probably, but wider adoption of neural processing or AI chipsets could dramatically reduce the CPU load of augmented reality apps. Not to mention how warm such apps tend to make a “normal” phone after just a couple of minutes’ use.

Rumours suggest the Samsung Galaxy S9 will have an AI chip or NPU — choose your own buzzword — and this would cement the tech as a norm for high-end phones, rather than a manufacturer-specific curiosity. We’ll likely find out by February’s end.

This article was originally published by WIRED UK