
Neither pixel size nor sensor size help low light performance (it's always been light)

People used to say high ISO causes noise. That was wrong. ISO is just gain applied to the signal. The noise was already there from not having enough photons. Cranking ISO doesn't create noise, it makes existing noise easier to see. Most photographers understand that now, which is good. But then a very similar mistake took its place. The new consensus became "sensor size is what matters." Full frame beats APS-C beats Micro Four Thirds beats phones. The larger the sensor, the better the image. And the related claim that bigger pixels collect more light, so fewer larger pixels are better than many small ones. This is the exact same structural error as the ISO myth. Sensor size is correlated with image quality, but that does not make it the cause.

Strip the problem down. Forget pixels entirely. Imagine a continuous sensor, no discrete photosites, just a surface that records every photon that hits it. This is basically what film is. Now ask: what determines how much light this surface receives? The sensor is passive. It does not reach out and grab photons. It sits there and records whatever arrives. For a given scene brightness, exposure time, and lens transmission, the variable that controls how many photons reach the sensor is the entrance pupil of the lens. The physical opening through which light enters the optical system. Bigger entrance pupil, more photons collected, better signal-to-noise ratio. The sensor contributes nothing to this equation.

Now go to the opposite extreme. Imagine a camera with 4 pixels. Not 4 megapixels, just four individual photosites covering the whole sensor. Same lens, same entrance pupil, same total light as a 50 megapixel camera. The image is not darker or noisier, because the same number of photons arrived. But you can barely distinguish anything: you get four blocks of average brightness. Increase to 16 pixels and large shapes appear. At 64 you get recognizable structure. As you keep increasing, each step adds spatial detail, but the image does not get brighter or cleaner at any step. You are just slicing the same photon field into finer spatial bins. Pixels determine resolution. Entrance pupil determines light. These are fully independent. (To fully resolve a spatial frequency, your pixel pitch needs to be at most half the period of that frequency. This is the Nyquist criterion.)
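You can simulate the pixel-count thought experiment directly. This is my own sketch, not from the post: a finely sampled Poisson photon field gets re-binned into ever larger "pixels", and the total photon count never changes, only the spatial resolution.

```python
# Sketch (my illustration): slicing the same photon field into coarser or
# finer pixel grids changes resolution, not the amount of light recorded.
import numpy as np

rng = np.random.default_rng(0)

# A "continuous" photon field sampled finely: 64x64 cells, ~10 photons each.
fine = rng.poisson(lam=10.0, size=(64, 64))

def rebin(img, factor):
    """Sum photon counts into (factor x factor) blocks, i.e. bigger pixels."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

for f in (1, 2, 8, 32):
    coarse = rebin(fine, f)
    # Total photons are identical at every pixel size; only resolution changes.
    print(f"{coarse.shape[0]:2d}x{coarse.shape[1]:2d} pixels, "
          f"total photons = {coarse.sum()}")
```

The 2x2 case at the end is the "4 pixels" camera: same light, four blocks of average brightness.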

That is where the "bigger pixels collect more light" argument falls apart. Yes, if you hold megapixel count constant and increase sensor size, each pixel gets physically larger and collects more photons individually. That observation is true. But it does not prove what people think it proves. Take two full-frame cameras, one at 12MP and one at 100MP, same lens, same exposure. The 100MP camera has much smaller pixels, each collecting fewer photons, but the total light across the entire sensor is identical. It is the same entrance pupil, so it must be the same photon count. Downsample the 100MP image to 12MP and the two are effectively indistinguishable in noise performance. The smaller pixels did not hurt anything. They added resolution without costing you light. Read noise is real, so this is not a universal law in every regime, but on modern sensors, where shot noise typically dominates read noise by orders of magnitude at normal exposures, the "small pixels are inherently worse" line is much weaker than people think.
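The downsampling claim is easy to check numerically. A sketch under assumed numbers (shot noise only, no read noise): each "large" pixel collects 900 photons, or the same area is split into 9 small pixels collecting ~100 each, then binned back together.

```python
# Sketch (assumed photon counts, shot-noise only): small pixels binned back
# to the large-pixel grid match the large pixels' SNR, because the lens
# delivered the same total photons either way.
import numpy as np

rng = np.random.default_rng(1)
n_big = 100_000          # number of large pixels simulated
k = 9                    # each large pixel's area = 9 small pixels
photons_per_big = 900.0  # photons a large pixel would collect

big = rng.poisson(photons_per_big, size=n_big)
small = rng.poisson(photons_per_big / k, size=(n_big, k))
binned = small.sum(axis=1)   # downsample small pixels to the large-pixel grid

def snr(x):
    return x.mean() / x.std()

print(f"large pixels SNR:  {snr(big):.1f}")     # ~sqrt(900) = 30
print(f"binned small px:   {snr(binned):.1f}")  # effectively the same
```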

The place people usually get fooled is cross-format comparisons at the same f-number. They hear "f/2.8 on APS-C" and "f/2.8 on full frame" and assume the aperture opening was the same. It was not. f-number is the ratio of focal length to the diameter of the entrance pupil. Here is a worked example so you can see exactly what goes wrong.

Shoot the same scene at f/2.8 on APS-C and f/2.8 on full frame, same field of view. APS-C needs a 35mm lens to get that FOV. Full frame needs 50mm. f/2.8 on a 35mm lens (APS-C) is a 12.5mm entrance pupil. f/2.8 on a 50mm lens (FF) is a 17.9mm entrance pupil. The full-frame system just collected over twice as much light and nobody noticed because the f-number stayed the same. f-number normalizes for irradiance, photons per unit area on the sensor, not total photon count. So same f-number means same brightness per pixel, but the full-frame sensor has more pixels across a larger area collecting more total light.
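The worked example above is just the f-number definition rearranged. The focal lengths are the article's; the three lines of arithmetic are mine:

```python
# f-number = focal length / entrance pupil diameter, so the same f-number
# on different formats hides different physical pupil sizes.
def entrance_pupil_mm(focal_mm, f_number):
    return focal_mm / f_number

apsc = entrance_pupil_mm(35, 2.8)   # APS-C lens for this field of view
ff = entrance_pupil_mm(50, 2.8)     # full-frame lens, same field of view

print(f"APS-C pupil: {apsc:.1f} mm")             # 12.5 mm
print(f"FF pupil:    {ff:.1f} mm")               # ~17.9 mm
print(f"light ratio: {(ff / apsc) ** 2:.2f}x")   # pupil area ratio, ~2x
```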

People see both images shot at f/2.8, same brightness, notice the full-frame image is cleaner, look at the cameras, see one sensor is bigger with bigger pixels, and conclude that sensor size or pixel size caused the difference. But the entrance pupil was different. That is what they missed. The pixels are bigger on the full-frame sensor, sure. But they are bigger because the sensor is bigger, and the sensor is bigger because you needed a longer focal length for the same FOV, and a longer focal length at the same f-number means a larger entrance pupil. The pixel size and the entrance pupil size both grew for the same geometric reason. People see the pixel grow and the image improve and draw a direct arrow between them, missing that both were caused by the same upstream variable.

This is also why the equivalence principle works. APS-C at f/2.8 is "equivalent" to full frame at f/4. The community knows this but often misunderstands why. An APS-C sensor has about 1/2.25 the area of full frame. If both sensors have the same pixel density, the full-frame sensor has about 2.25x the megapixels. Same FOV, same entrance pupil, same total light. But the full-frame sensor at f/4 has lower irradiance, so each pixel is dimmer. People see that and think the full-frame image must be worse. "The pixels are getting less light, you need to boost ISO, that adds noise." This is wrong, and the math shows exactly why. (The numbers use 200 pixels instead of 225 to keep the arithmetic clean.)

APS-C: 100 pixels, 10 photons each. Shot noise per pixel is sqrt(10), about 3.16. SNR per pixel is 10/3.16 = 3.16.

Full frame at f/4: 200 pixels, 5 photons each. Shot noise per pixel is sqrt(5), about 2.24. SNR per pixel is 5/2.24 = 2.24. Looks worse so far.

Now downsample the full-frame image to match APS-C resolution. (Remember that at a fixed pixel density, the larger sensor means more MP) Signal is still 5, but noise also averages. Two independent noise sources of sqrt(5) each, averaged, gives sqrt(10)/2, about 1.58. SNR is now 5/1.58 = 3.16. (Signal sums, noise adds in quadrature.)

Apply 2x gain to match brightness. Signal becomes 10. Noise becomes 3.16. SNR is 10/3.16 = 3.16.

APS-C without any manipulation: signal 10, noise sqrt(10) = 3.16, SNR = 3.16.

Identical. The gain did not cost anything because it was applied to a signal that was cleaner to begin with. The photon budget was already set by the entrance pupil the moment the photo was captured. The gain step afterward does not rescue a noisy image or ruin a clean one. It just scales signal and noise together.
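The whole walkthrough fits in a few lines of arithmetic. This transcribes the steps above exactly (shot noise only, 1000 total photons on each sensor, the article's 200-pixel rounding):

```python
import math

# APS-C: 100 pixels x 10 photons. Full frame at f/4: 200 pixels x 5 photons.
apsc_signal = 10.0     # photons per APS-C pixel
ff_signal = 5.0        # photons per full-frame pixel

snr_apsc = apsc_signal / math.sqrt(apsc_signal)   # 10/sqrt(10) ~ 3.16

# Downsample full frame: average pairs of pixels. Signal stays 5; the two
# independent sqrt(5) noises add in quadrature, then averaging halves them.
ds_noise = math.sqrt(2 * ff_signal) / 2           # sqrt(10)/2 ~ 1.58
snr_ds = ff_signal / ds_noise                     # ~3.16

# 2x gain to match brightness scales signal and noise together: SNR unchanged.
snr_gain = (2 * ff_signal) / (2 * ds_noise)       # ~3.16

print(round(snr_apsc, 2), round(snr_ds, 2), round(snr_gain, 2))
```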

So the equivalence principle is not some mysterious coincidence. It is conservation of photons. Same entrance pupil, same light, different distribution on the sensor plane, fully recoverable by binning.

The obvious follow-up question is: if entrance pupil is all that matters, why not just put a huge lens on a tiny sensor and collect the same light as medium format? You can, up to a point. But there is a hard physical limit, conservation of etendue, a result from geometric optics; a system that violated it could also violate the second law of thermodynamics. It is the same reason a magnifying glass cannot heat something above the surface temperature of the sun.

When you compress a large light cone onto a small sensor, the angles of incidence have to increase to compensate. You can keep adding lens elements to compress the image further and each one works, but each stage costs angular headroom. The light cone hitting the sensor gets steeper and steeper. There is a hard ceiling. Light can only arrive from a hemisphere, 2pi steradians, corresponding to roughly f/0.5. Beyond that, photons would have to arrive from behind the sensor, which is physically impossible.
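The ceiling can be sketched with the standard relation for a lens in air, roughly N = 1/(2·NA), where NA = sin(half-angle of the light cone). This formula is my addition, not from the post, but it shows where f/0.5 comes from:

```python
import math

# As the light cone hitting the sensor steepens, NA = sin(theta) approaches
# its hemisphere limit of 1, which pins the f-number floor near f/0.5.
def f_number_from_half_angle(theta_deg):
    na = math.sin(math.radians(theta_deg))  # numerical aperture in air
    return 1.0 / (2.0 * na)

for theta in (15, 45, 75, 90):
    print(f"half-angle {theta:2d} deg -> f/{f_number_from_half_angle(theta):.2f}")
```

At a 90-degree half-angle, light would be arriving parallel to the sensor plane; anything steeper would have to come from behind it.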

This is why larger sensors exist. They are not better at capturing light in some mystical sense. They are the only way to accept more light once you have exhausted the angular budget. You cannot squeeze the photons tighter, so you expand the recording area. Larger sensors make it more practical to build systems that accept more total light for a given framing. That is a real advantage, but it is an optical system advantage, not proof that sensor size itself is the fundamental cause.

Here is a thought experiment that ties all of this together. Take a full-frame camera shooting at 70mm f/5.6. Entrance pupil is 12.5mm. Now take a tiny 1/2.3" sensor, about a 5.5x crop factor, with 50MP crammed onto it at roughly 0.75um pixel pitch. Pair it with a 12.5mm f/0.95 lens. Entrance pupil: 13.2mm. The full-frame equivalent focal length is about 69mm, so we are comparing the same framing. The tiny sensor system collected more total light. 13.2mm vs 12.5mm entrance pupil. It also has more megapixels, so it resolves more detail. Smaller sensor, smaller pixels, more megapixels, better image. The only variable that favored the small sensor was the lens. You might think "but that is cheating, you gave the small sensor a way faster lens." But this is exactly what has been happening every time you compare full frame to APS-C. The full-frame system gets a larger entrance pupil because it needs a longer focal length for the same field of view, and nobody calls that cheating. Nobody even notices. I just did the same thing in reverse, gave the smaller sensor the light collection advantage instead of the bigger one. And suddenly it is obvious that the sensor was not doing the work. It was always the lens.
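The thought experiment's numbers, recomputed (crop factor 5.5 as assumed in the text):

```python
# Full frame at 70mm f/5.6 vs a 1/2.3" sensor with a 12.5mm f/0.95 lens.
ff_pupil = 70 / 5.6          # -> 12.5 mm entrance pupil
tiny_pupil = 12.5 / 0.95     # -> ~13.2 mm entrance pupil
equiv_focal = 12.5 * 5.5     # full-frame equivalent focal length, ~69 mm

print(f"full frame pupil:  {ff_pupil:.2f} mm")
print(f"tiny sensor pupil: {tiny_pupil:.2f} mm")
print(f"light advantage of the tiny system: {(tiny_pupil / ff_pupil) ** 2:.2f}x")
```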

The community got the ISO argument right. Now apply the same logic one level up. ISO does not cause noise, lack of light causes noise. Sensor size does not cause quality, lack of light causes poor quality. The only thing that determines how much light you capture is the entrance pupil. Everything else, pixel size, sensor size, megapixel count, is either a resolution variable or a geometric consequence of the optics. None of it is a light variable. And light is all that matters for SNR.
