Deep Fusion uses machine learning to process images, optimizing textures, reducing noise, and improving overall picture quality. The result is a highly detailed, high-quality photo, and the feature works entirely in the background.
How Deep Fusion works
Before you even press the shutter button, the camera has already captured four frames at high speed; when you press it, four more are taken. A neural network then combines the best of the captured frames.
Apple says the technology is quite different from Smart HDR: Deep Fusion selects the short-exposure frame with the most detail and combines it with the longer-exposure frames.
The images then pass through four stages of pixel-by-pixel processing, each designed to recover more detail.
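The capture-and-merge flow described above can be sketched as a toy multi-frame fusion: pick the short-exposure frame with the most detail, then blend it pixel by pixel with a long-exposure frame. Everything here (the `sharpness` metric, the `fuse` function, the blend weight) is an illustrative assumption; Apple's actual pipeline is proprietary and far more sophisticated.

```python
def sharpness(frame):
    """Crude detail metric: sum of absolute differences between
    horizontally adjacent pixels (more edges -> higher score)."""
    return sum(
        abs(row[i] - row[i - 1])
        for row in frame
        for i in range(1, len(row))
    )

def fuse(short_frames, long_frame, detail_weight=0.7):
    """Pick the sharpest short-exposure frame and blend it,
    pixel by pixel, with the long-exposure frame.

    Frames are lists of rows of brightness values in [0, 1]."""
    best = max(short_frames, key=sharpness)
    return [
        [detail_weight * b + (1 - detail_weight) * l
         for b, l in zip(best_row, long_row)]
        for best_row, long_row in zip(best, long_frame)
    ]

# Toy example: one detailed short exposure, one flat one,
# and a dim long exposure.
short_a = [[0.0, 1.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]  # detailed
short_b = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]  # flat
long_f  = [[0.2, 0.2, 0.2, 0.2], [0.2, 0.2, 0.2, 0.2]]

fused = fuse([short_a, short_b], long_f)
```

The real algorithm operates per pixel over several processing passes, but the core idea (detail from the sharpest short exposure, tone and noise characteristics from the long exposure) is the same.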
According to The Verge, the new feature will arrive in an upcoming iOS 13 beta, most likely iOS 13.2, which is expected to be released today. [9to5]