Once the Pixel Visual Core co-processor is switched on, third-party apps installed on the new Pixel devices will be able to take advantage of the Pixel's AI-enhanced camera results.
One of the best features of the Pixel and Pixel 2 range is the camera. Powered by Google's machine learning smarts, it can take great photos the instant you press the shutter button. It also produces some of the best depth effects we've seen in any portrait mode, despite having just one camera on the back.
Once the custom-designed Pixel Visual Core co-processor in the Pixel 2 is enabled, that same machine learning capability becomes available to other apps on the phone, so you're no longer restricted to the Pixel's dedicated camera app. In practice, that means you won't have to take a photo in the camera app first and then upload it to Instagram to get your Insta pics looking as good as they possibly can.
While the most impressive and most noticeable benefit of the Pixel Visual Core will be its HDR+ results, there are real advantages to shooting images directly through your favourite apps too.
The Pixel 2's camera, for instance, uses a technique called RAISR, which makes zoomed-in shots look sharper and more detailed than was previously possible without an optical zoom. You also get zero shutter lag, so the picture is captured virtually the moment you press the shutter button.
While Instagram, Snapchat and WhatsApp are the only apps mentioned in the announcement, Google has stated that the capability is open to any app developer, so expect to see more apps matching the performance of Google's own Pixel camera app in the near future.
The great news for Pixel 2 and 2 XL users is that all you need to do to enable this is wait for the next software update to arrive on your phone. Google's February software update is rolling out to customers now and, once installed, will automatically activate the Pixel Visual Core.