Why Apple worries about photography on the iPhone

Apple’s decision to invest in iPhone photography was incredibly shrewd: according to Sony, smartphone cameras will deliver better image quality than DSLRs within the next few years.

“We expect that still images will exceed the image quality of single-lens reflex cameras within the next few years,” said Terushi Shimizu, President and CEO of Sony Semiconductor Solutions (SSS). This statement comes as pro photographers (and video makers) make increasing use of iPhones for professional work — but also as machine vision intelligence reaches a tipping point to enable enterprise and industrial applications.

Throughout the history of the iPhone, Apple has focused on the device’s use as a camera, and it has built out a wider ecosystem to support that use. Think about the Photos app, built-in document scanning, AI-driven person recognition, machine-driven text identification and translation, and, most recently, the capacity to identify flowers and animals using Visual Lookup. Apple’s efforts arguably accelerated with the iPhone 7 Plus, which brought multiple lenses and zoom functionality to the iPhone for the first time.
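
To give a sense of how accessible this on-device vision intelligence now is, here is a minimal sketch of the kind of text recognition behind features such as Live Text, written against Apple’s Vision framework. It is an illustrative example only, with error handling kept deliberately thin:

```swift
import Vision
import UIKit

// Minimal sketch: recognize text in a UIImage using the Vision framework,
// the same on-device machinery behind features such as Live Text.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion([])
            return
        }
        // Take the top candidate string from each detected text region.
        let strings = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(strings)
    }
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```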

Watch your image

Sony, which holds 42% of the global image sensor market for phones and has three of its highest-end IMX sensors inside the iPhone 13, believes sensors in high-end devices will double in size by 2024.

This will enable “new imaging experiences,” Sony said. These will inevitably include zoom effects boosted by AI and Super HDR, and should also extend to 8K video capture on smartphones. I think they will also extend to true 3D image capture capable of supporting genuinely immersive 3D experiences. These predictions make it clear that Apple’s Cinematic Mode is a stalking horse for exploiting future evolutions of smartphone camera sensors.

But this kind of machine vision intelligence is just the consumer front end to far more complex operations that should translate into interesting enterprise opportunities. I’ve written before about Triton Sponge, which enables surgeons to more accurately track patient blood loss during surgery, and I think everyone now understands how camera intelligence combined with augmented reality can optimize performance across distribution, warehousing, and logistics chains. Industry is also embracing smartphone-quality imaging intelligence: factories use fault-detection systems mounted on iPads and other devices to monitor production and maintain quality control, for example, and AR-based retail experiences continue to improve.
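
For a rough idea of what such an iPad-based fault-detection setup might look like, here is a hedged sketch that classifies production-line frames with a Core ML image classifier via Vision. The `DefectClassifier` model name, the “defect” label, and the confidence threshold are all hypothetical placeholders, not a real Apple or vendor API:

```swift
import Vision
import CoreML

// Sketch of iPad-based quality control: run each captured frame through a
// Core ML image classifier and flag frames that look defective.
// "DefectClassifier" is a hypothetical model; substitute your own .mlmodel.
func inspectFrame(_ pixelBuffer: CVPixelBuffer,
                  flagDefect: @escaping (String, Float) -> Void) throws {
    let defectModel = try DefectClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: defectModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Illustrative rule: treat anything the model labels "defect"
        // with more than 80% confidence as a fault worth flagging.
        if top.identifier == "defect" && top.confidence > 0.8 {
            flagDefect(top.identifier, top.confidence)
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```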

But what we know about today’s use cases must be considered alongside the consequences of those only now coming onstream, particularly around autonomy and augmented reality. We think Apple is working towards the eventual introduction of its take on AR glasses, possibly due early next year, which may be equipped with an array of as many as eight cameras; the sensors will quite probably come from Sony, which I believe has worked with Apple on and off on these projects for years. A highly interesting thread from Robert Scoble makes a host of predictions concerning these plans; whatever the details, those cameras will likely be used to analyze and augment the reality you are in, as well as to deliver virtual experiences you can safely explore.

A human wearing a set of AR glasses will rely on much the same set of technologies as a vehicle using machine vision intelligence to drive itself on the public highway. Accurate image sensors combined with the kinds of AI we already use daily in iPhones will be fundamental to the development of the Apple Car. The recently announced Door Detection feature is a fantastic illustration of how AI and vision can work together to help a human understand and navigate the world.

Similar combinations of tech (boosted by supporting technologies such as UWB and LiDAR) will be used to help build vehicles that can autonomously comprehend and navigate roads. In health, we can easily predict that as smartphone cameras improve, they will increasingly support remote patient care and semi-autonomous surgery.
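
As a small taste of how that kind of sensor fusion is already exposed to developers, here is a minimal ARKit sketch that requests LiDAR-derived scene depth, the raw per-pixel distance data that underpins this sort of spatial understanding. It assumes a LiDAR-equipped iPhone or iPad and an existing ARSession:

```swift
import ARKit

// Minimal sketch: ask ARKit for LiDAR-derived scene depth on supported devices.
// Each ARFrame then carries a depth map that apps can use for spatial understanding.
func startDepthTracking(with session: ARSession) {
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else {
        print("Scene depth requires a LiDAR-equipped device")
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics.insert(.sceneDepth)
    session.run(configuration)
}

// Reading the depth map from the current frame: a CVPixelBuffer of
// per-pixel distances from the camera, measured in meters.
func currentDepthMap(from session: ARSession) -> CVPixelBuffer? {
    return session.currentFrame?.sceneDepth?.depthMap
}
```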

Of course, as such imaging-based use cases proliferate, it is reasonable to anticipate accelerated innovation in the CMOS sensor industry, which is clearly what Sony is banking on. The impact? Eventually, the camera you wear on your glasses will be capable of capturing photographs as good as those you take today with a DSLR, probably faster than you can say “Hey Siri,” and boosted by the same processor you use in your Mac.

Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.



