To the extent that chance is reduced, the outcome better correlates with ability that day. Correlation means the better players that day tend to have better outcomes. Reducing chance increases the reliability of the game.
(Pedant's Paradise: In measurement theory, the concern is that a method of measurement be valid. Reliability is a technical term that does not mean validity, and being reliable is not nearly as important as being valid. However, the two are essentially the same in bridge, because ability means doing well in tournaments. Furthermore, because the underlying concern is ability that day, reliability seems like the better word to me -- validity raises the spectre of measuring overall ability, not ability that day. The nontechnical meaning of reliability works fine -- the results are reliable to the extent that we can rely on them to tell us who was the best player that day.)
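The claim that reducing chance tightens the link between ability and outcome can be illustrated with a small simulation. This is only a sketch: the additive noise model and the standard deviations chosen are my assumptions for illustration, not anything from the essay.

```python
import random
import math
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var)

def simulate(chance_sd, n_players=2000, seed=0):
    """Model each outcome as ability-that-day plus a chance component.

    chance_sd controls how much luck contributes; ability is drawn
    with standard deviation 1 (an arbitrary modeling assumption).
    """
    rng = random.Random(seed)
    ability = [rng.gauss(0, 1) for _ in range(n_players)]
    outcome = [a + rng.gauss(0, chance_sd) for a in ability]
    return correlation(ability, outcome)

# Shrinking the chance component makes outcomes track ability more closely,
# i.e., the game becomes more reliable in the essay's sense.
print(simulate(chance_sd=2.0))  # heavy luck: weaker correlation
print(simulate(chance_sd=0.5))  # light luck: stronger correlation
```

Under this toy model, the correlation is roughly 1/sqrt(1 + chance_sd²), so halving the chance component noticeably improves how well the final standings reflect who played best that day.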
However, reliability is a difficult factor to consider. As a practical matter, it is equally reasonable that masterpoint awards not be influenced by reliability.
Directors should manage a tournament so as to reduce the influence of chance as much as possible. The ACBL encourages directors to do this.
Similarly, good players apparently are more likely to do well in a Swiss Teams than in a pairs event. This raises the question of whether team events deserve their increased award over pair events because of their reliability.
The answer to these speculations is no. The huge discrepancy between team events and pair events is all out of proportion to whatever increase there might be in the reliability of team games. (And the reliability of team games might be an illusion. See the essay The "Illusion" that Team Events are More Reliable than Pair Events.) This huge discrepancy is caused by a bug in the formula. It is certainly not some sophisticated attempt to reflect reliability.
Similarly, the discrepancies across field size are far too large to be reflections of reliability. And again, these discrepancies are the result of using a simple formula that unfortunately doesn't work well. The formulas show no sign of an attempt to reflect reliability.
Finally, the ACBL does not justify its policies, but I see no indication that it considers reliability in setting masterpoint awards.