I think my use of the word “native” 1080p camera capture was confusing. I need one paragraph to unravel the confusion, then another paragraph to actually answer your question.

I was trying to say that a full-sensor 4K image downscaled to 1080p with Lanczos would look better than the same 4K sensor trying to fake a 1080p image by skipping every other pixel, or by running a low-quality in-camera 4K-to-1080p scaler. For people like me with only one 4K mirrorless camera, those hack-job in-camera downsize methods are the only “native” 1080p acquisition options we have.

You asked why I needed 4K, and getting around the quality ding of 1080p in-camera fakery is my reason why. I need my image to come from the full surface of the sensor, not from every other pixel. The logic for why 4K->1080p is better should be pretty straightforward in this context: if the 4K sensor is skipping pixels to create a 1080p image, it is not collecting the same light as a 4K image that uses every pixel. The skipped 1080p version misses any contrast changes that happened within the pixel gaps, and that contrast is never averaged into the pixels that did get sampled.
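A tiny numpy sketch can illustrate the point. This is a hypothetical toy example, not how any camera firmware actually works: a fine checkerboard stands in for detail smaller than the 1080p grid, and a simple 2x2 box average stands in for a proper Lanczos filter.

```python
import numpy as np

# A fine checkerboard of 0/255 values: detail at the sensor's
# pixel pitch, finer than the target 1080p-style grid.
fine = (np.indices((8, 8)).sum(axis=0) % 2) * 255

# "Pixel skipping": keep every other row and column.
# Every sample lands on the same phase of the pattern,
# so the contrast in the gaps vanishes entirely.
skipped = fine[::2, ::2]

# Full-sensor downscale: average each 2x2 block (a crude
# stand-in for Lanczos). The contrast from every pixel is
# folded into the output as a mid-gray.
averaged = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(skipped)   # all zeros: the fine detail aliased away
print(averaged)  # uniform 127.5: the light was still collected
```

The skipped version throws away half the light and all of the sub-grid contrast; the averaged version keeps the energy from the full sensor surface, which is the whole argument for 4K->1080p in post.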