Native/Base ISO Explained
For anyone who is interested, here is a rough description of how the true "base" ISO is determined...
Every detector has a quantum efficiency and a full well capacity. The quantum efficiency is the percentage of incident photons that are actually registered by the detector. Typical values these days are somewhere around 70%, depending on wavelength. Some run higher, some lower, but that's the right ballpark. Full well capacity is how many electrons each pixel on the detector can store before it starts having to bleed them off--before it is full. Values vary depending on the size of the pixel: smaller pixels hold fewer electrons and larger pixels hold more.
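As a rough sketch of those two properties, here is how photons hitting a pixel turn into stored electrons. The 70% quantum efficiency and 60,000-electron full well are illustrative numbers, not values from any specific sensor:

```python
# Hypothetical illustration: converting incident photons to stored electrons.
# QUANTUM_EFFICIENCY and FULL_WELL_CAPACITY are example values, not from
# any real camera.
QUANTUM_EFFICIENCY = 0.70     # fraction of incident photons registered
FULL_WELL_CAPACITY = 60_000   # electrons a pixel can hold before saturating

def electrons_collected(incident_photons: int) -> int:
    """Electrons a pixel stores, clipped at full well capacity."""
    registered = int(incident_photons * QUANTUM_EFFICIENCY)
    return min(registered, FULL_WELL_CAPACITY)

print(electrons_collected(50_000))    # 35000 -- well below saturation
print(electrons_collected(100_000))   # 60000 -- clipped at full well
```

Past full well, extra photons are simply lost--the pixel can't represent a brighter signal, which is what "saturated" means below.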
At base ISO, you set up the gain so that as you convert the voltage to a digital tone, you run out of bits at about the same point as you hit the full well capacity. In other words, if you were talking about a 16-bit camera, you would reach "full well" just as you hit a brightness level of 65,535. This gives you your widest dynamic range--you can represent nearly every variation in voltage that your chip can capture. Nothing is wasted.
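That matching of full well to the top digital code can be sketched like this. The full well value and the linear conversion are assumptions for illustration--real pipelines are not this simple, as noted later:

```python
# Sketch of the base-ISO idea: choose a gain so that full well capacity
# maps to the top code of a 16-bit ADC. FULL_WELL is an assumed value.
FULL_WELL = 60_000        # electrons (illustrative)
BITS = 16
MAX_CODE = 2**BITS - 1    # 65,535

base_gain = MAX_CODE / FULL_WELL   # digital counts per electron

def to_digital(electrons: float, gain: float = base_gain) -> int:
    """Idealized linear conversion of collected electrons to a digital tone."""
    return min(round(electrons * gain), MAX_CODE)

print(to_digital(FULL_WELL))   # 65535 -- full well lands at the top code
print(to_digital(30_000))      # a mid-range signal gets a mid-range code
print(to_digital(0))           # 0 -- no signal, bottom code
```

With this gain, every electron count between zero and full well maps onto the available codes, so no part of the sensor's range is thrown away.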
When you use a camera at less than base ISO, it simply doesn't amplify the signal quite as much, so you may well hit "full well" at something less than 65,535. You are wasting some dynamic range.
When you use a camera at higher gain--higher ISO--you are doing the opposite. You are adding additional amplification, so you hit pure white in your output even before your sensor is saturated.
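Both cases can be seen by scaling the base gain up or down in the same toy model. The ISO multiplier and sensor numbers here are assumptions for illustration only:

```python
# Sketch of pull vs. push: scale amplification relative to base gain.
# FULL_WELL and the multipliers are illustrative, not from a real camera.
FULL_WELL = 60_000
MAX_CODE = 2**16 - 1
BASE_GAIN = MAX_CODE / FULL_WELL

def to_digital(electrons: float, iso_multiplier: float) -> int:
    """Amplify relative to base gain, clipping at the top digital code."""
    return min(round(electrons * BASE_GAIN * iso_multiplier), MAX_CODE)

# Pull (below base, multiplier 0.5): even a saturated pixel only reaches
# about half of 65,535 -- the top codes go unused.
print(to_digital(FULL_WELL, 0.5))

# Push (above base, multiplier 2.0): output hits pure white at only half
# full well -- the sensor's remaining headroom is clipped away.
print(to_digital(FULL_WELL // 2, 2.0))   # 65535
```

Either way, the mismatch between amplification and full well is where the dynamic range goes.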
In either scenario--pull or push--you are sacrificing some dynamic range. In practice, it's a bit more complex than I am describing, since nobody actually processes their detectors linearly and since anti-blooming gate technology also adds a curve. I'm also ignoring quantization errors that can become relevant in the shadows at higher ISO. But the basic idea is still valid.
By Jared Wilson