Logistically, having input trim adjustments physically near their sources can be as much an advantage as a disadvantage. Particularly today, with most instruments having their own built-in, powered preamplification stages (with their own output level adjustments), it makes more sense to adjust the instruments and the A/D converter input trims together in the same physical vicinity: at the stage. This often requires communication and coordination between the sound technician and the performer whose instrument is being adjusted, which is more easily done at the stage.
Unless a live sound configuration consists of only non-powered sources on stage (microphones and non-powered instrument pickups) and a powered f.o.h. mixing console, you will have to deal with level trims on the instruments and input level trims on the power amplifiers, none of which are accessible from the f.o.h. mixing console in an analog sound setup. We hear anecdotal criticisms in live sound about the inconvenience of running from the back to the front of the venue to adjust analog trims on the inputs of the A/D converters, trims that used to be accessible on the f.o.h. console. But this must be balanced against all the times one is working up front and does not have to run all the way back to the f.o.h. mixing console to adjust those same trims! It is therefore a simple tradeoff in the worst case, and somewhat of an advantage in the best case, as pointed out in the preceding paragraph.
Keep in mind that trim adjustments are meant to normalize levels traveling through the sound system, not to adjust live mix levels; they should be made during sound checks, not during a performance. Particularly where a stage monitor mix is employed, changing trim adjustments at the sound board during a performance is poor practice, as it affects both the house mix and the stage monitors at a time when the sound technician and performers can no longer discuss the adjustments' impact on the stage monitor mix.
From a technological standpoint, many mixing boards built in the mid-1980s and earlier had "sweet spots" of one sort or another that called for optimizing levels along signal paths to avoid significant signal-to-noise ratio problems, to prevent (or perhaps bring out) compression effects, and so on. Mackie changed the state of the art of live sound reinforcement in the mid-1990s by introducing microphone preamps of a quality unprecedented at that time in their lineup of low-cost, general-purpose mixing boards. Their competitors followed (and soon surpassed them). Advances in, and cost reduction of, analog integrated circuit components also contributed, giving even low-cost mixing consoles a more linear and clean response and largely eliminating the old "sweet spots".
Digital audio then takes this yet another step. A 24-bit digital audio system, for example, has roughly 144 dB of theoretical dynamic range (up from 96 dB for 16-bit CD digital audio). Once the signal is in the digital domain, numbers are manipulated mathematically. A digital mixer, for example, can operate entirely in the digital domain; its "faders" and other control knobs simply pass positional information about the knob or fader slider to computer chips inside the unit, much as the mouse or scroll wheel of a computer does.
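The dynamic range figures above follow directly from the bit depth: each bit of linear PCM resolution contributes about 6.02 dB (20 log10 2). A quick sketch of the arithmetic:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range in dB of a linear PCM word of `bits` bits.

    This is the ideal quantization figure (20 * log10(2^bits)); real-world
    converters fall somewhat short of it.
    """
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # 96.3 dB (the "96 dB" of CD audio)
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # 144.5 dB (the "144 dB" above)
```

The commonly quoted 96 dB and 144 dB figures are these ideal values rounded down; analog noise floors in the converters themselves keep real systems below the theoretical number.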
The increased dynamic range (headroom) and lack of "sweet spots" mean that in a system designed around AudioRail and digital audio, all input trims can (and should) be reduced to a point that securely precludes overload (clipping) during a live performance, and that this can be done without fear of losing any perceptible sonic fidelity. This eliminates the need for "arm's length" access to trim adjustments during a live performance.
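To see why padding the trims down is essentially free at 24 bits, consider how much range remains after a safety pad. The sketch below assumes a hypothetical 20 dB pad (the exact pad value is an illustration, not a recommendation from the text) and the ideal 6.02 dB-per-bit figure:

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of range per bit of PCM resolution

def remaining_range_db(bits: int, trim_pad_db: float) -> float:
    """Theoretical dynamic range left after padding the input trim down by
    trim_pad_db to guard against clipping."""
    return bits * DB_PER_BIT - trim_pad_db

# Hypothetical 20 dB safety pad on a 24-bit converter input:
print(f"{remaining_range_db(24, 20.0):.1f} dB remaining")  # 124.5 dB
```

Even after giving away 20 dB of headroom, roughly 124 dB of range remains, still far beyond the 96 dB of 16-bit CD audio and beyond the noise floor of any analog stage feeding the converter, which is why the pad costs no perceptible fidelity.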