For years, iPhone camera resolution was 3,024 × 4,032 = 12 megapixels. Last year the iPhone 14 Pro Main Camera (previously called Wide) was upgraded to 6,048 × 8,064 = 48MP, but actually seeing all those pixels required RAW mode, and those images could be 80 megabytes each. That made everything slow and awkward, so this year the iPhone 15 and iPhone 15 Pro keep a 48MP Main Camera, but Apple’s preferred output format is now a 24MP non-RAW image (HEIF or JPEG).
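The megapixel figures are just width times height, so the arithmetic is trivially checkable:

```python
# Megapixels are just width x height, counted in millions of pixels.
def megapixels(width: int, height: int) -> float:
    return width * height / 1_000_000

print(megapixels(3024, 4032))  # 12.192768 -> the classic "12MP"
print(megapixels(6048, 8064))  # 48.771072 -> the 48MP Main sensor
```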
But ever since the 24MP image size announcement on 2023/09/12, I have been shaking my head trying to figure out how Apple generates 24MP images from a 48MP sensor. I understand that 24MP is a convenient size — the same 3×4 proportions as 12MP and 48MP, not much worse for image processing or storage, with appreciably higher quality and detail than 12MP — but not how Apple generates these synthetic pixels without the scaling blurring out all that lovely detail. For reference, here are trivial images I took using each of the 6 ‘native’ zoom/crop options offered by my iPhone 15 Pro (non-Max).
Naturally the 0.5x and 3x images are 3,024 × 4,032, because those camera sensors are 12MP. But the 1.0x, 1.2x, and 1.5x images are each 4,284 × 5,712 = 24MP, extracted from some or all of the 48MP Main sensor’s pixels (the 1.2x and 1.5x options crop away some pixels, effectively giving an optical zoom).
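To put rough numbers on those crops, here is a back-of-the-envelope sketch in Python. The assumption that 1.2x and 1.5x are simple centered crops of the 48MP mosaic, each resampled to the same fixed 24MP output size, is mine, not anything Apple has documented; the constants just restate the dimensions above.

```python
# A sketch of the zoom arithmetic, assuming (my assumption, not Apple's
# documentation) that 1.2x and 1.5x are plain centered crops of the 48MP
# mosaic that then get resampled to the fixed 24MP output size.
SENSOR_W, SENSOR_H = 6048, 8064   # 48MP Main sensor
OUT_W, OUT_H = 4284, 5712         # 24MP HEIF/JPEG output

for zoom in (1.0, 1.2, 1.5):
    crop_w, crop_h = round(SENSOR_W / zoom), round(SENSOR_H / zoom)
    crop_mp = crop_w * crop_h / 1e6
    scale = OUT_W / crop_w        # linear resample factor, crop -> output
    print(f"{zoom}x: crop {crop_w} x {crop_h} (~{crop_mp:.1f} MP), "
          f"resampled by ~{scale:.2f}x to {OUT_W} x {OUT_H}")

# 1.0x: crop 6048 x 8064 (~48.8 MP), resampled by ~0.71x to 4284 x 5712
# 1.2x: crop 5040 x 6720 (~33.9 MP), resampled by ~0.85x to 4284 x 5712
# 1.5x: crop 4032 x 5376 (~21.7 MP), resampled by ~1.06x to 4284 x 5712
```

If that guess is right, 1.0x is roughly a 1/√2 linear downscale, and the 1.5x crop actually contains slightly fewer pixels than the 24MP output, so whatever resampling is happening cannot be a clean integer binning in either direction, which is exactly the part that puzzles me.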