With most tools, compression means: make the loudest things quieter and the quiet things louder. So you can pump up the overall volume.
Every blade of grass on the lawn ends up the same length.
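The idea above can be sketched in a few lines. This is a minimal, hypothetical downward compressor working on normalized sample values in -1.0..1.0; the threshold and ratio numbers are just illustrative, not any tool's defaults.

```python
# Minimal sketch of downward compression on a single sample value.
# Samples are assumed normalized to -1.0..1.0; threshold and ratio
# are hypothetical example settings.
def compress(sample, threshold=0.5, ratio=4.0):
    """Reduce the part of the signal that rises above the threshold."""
    level = abs(sample)
    if level <= threshold:
        return sample  # quiet parts pass through unchanged
    # Above the threshold, the excess is divided by the ratio.
    compressed = threshold + (level - threshold) / ratio
    return compressed if sample >= 0 else -compressed

print(compress(1.0))   # 0.625: the loudest peak is pulled down
print(compress(0.3))   # 0.3: a quiet sample is untouched
```

After peaks are reduced like this, make-up gain can raise the whole signal, which is the "pump up the volume" part.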
if at all, used until the very end
Yes, that's an important mixing point you make there. Some things should only be used at the very end of the process.
That typically applies to the 'normalize' function, for example. If tracks are normalized to 100 % at an early stage and then still have to be mixed together, distortion in the result is almost inevitable. The same goes for boosting frequencies with a filter. During the working process, keep the tracks at a maximum volume of around 70 to 80 percent, so you have headroom on both sides.
If a track is fully digital, working at a lower volume along the way does no harm: amplification at the end won't raise any background noise the way tape does. Ten times zero is still zero.
On the other hand, the volume range of a recording shouldn't be too small either, because then you're not using the full dynamic resolution.
A piece of basic advice, just to build in some safety: almost never use 100 % as a setting. If you want the max, use 96 to 99 %.
Mixing digital data basically means adding up sample values, so the risk of exceeding the maximum value is obvious.
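To make that concrete, here is a small sketch with made-up sample values (normalized to -1.0..1.0). Two tracks that each stay safely below full scale can still push the sum over the limit.

```python
# Hypothetical sample values for two tracks, normalized to -1.0..1.0.
track_a = [0.98, 0.50, -0.70]
track_b = [0.98, 0.40, -0.60]

# Mixing is just sample-by-sample addition.
mix = [a + b for a, b in zip(track_a, track_b)]

# Count how many mixed samples exceed full scale and would clip.
clipped = sum(1 for s in mix if abs(s) > 1.0)
print(clipped)  # 2: the first and last samples both exceed the maximum
```

Each input sample is legal on its own; only the addition pushes the result out of range, which is exactly why the headroom advice above matters.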
In practice:
Suppose I have two stereo tracks, both normalized to 98 %, in the multitrack editor, and I want to mix them down to a new stereo file.
Then I would preset the playback volume of each track in the multitrack editor at minus 6 dB or lower. Minus 6 dB roughly halves each track, so even if both peaks hit at the same instant, the mix can't clip anywhere.
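The arithmetic behind that preset can be checked with the standard dB-to-gain conversion (gain = 10^(dB/20)); the 0.98 peak value below is just the 98 % normalization from the example.

```python
# Sketch: convert a dB attenuation to a linear gain factor and check
# the worst case for two tracks whose peaks coincide.
def db_to_gain(db):
    """Amplitude gain factor for a given level change in dB."""
    return 10 ** (db / 20.0)

gain = db_to_gain(-6.0)        # about 0.501, i.e. roughly halving
peak = 0.98                    # each track normalized to 98 %
worst_case = 2 * peak * gain   # both peaks landing on the same sample
print(worst_case < 1.0)        # True: the mixdown cannot clip
```

With only -5 dB (a factor of about 0.562) the theoretical worst case would still slightly exceed full scale; -6 dB is the safe bound for two tracks.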
The use of reverb/echo is a nice point too. The danger of things getting muddy is present here as well. Maybe 'soup' is a better word than muddy: before you know it, copies of the same sounds are floating around in a chaotic mess.
Especially if more instruments make use of the same reverb/echo in the mix.
To avoid this, the main principle is: create differences. Even tiny ones.
So don't use an exact copy of the reverb/echo settings for the next track; at least change some insignificant variable, say from 72 to 74 percent.
Another soup-avoider is to give the reverb/echo of each track a different panning position. So track 1's echo sits between center and left, and track 2's echo between center and right.
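That placement can be sketched with a pan law. This is just one common choice (an equal-power pan law); the pan positions -0.5 and 0.5 are hypothetical stand-ins for "between center and left" and "between center and right".

```python
import math

# Equal-power pan law sketch: pan runs from -1.0 (hard left)
# to 1.0 (hard right); 0.0 is center.
def pan_gains(pan):
    """Return (left_gain, right_gain) for a pan position."""
    angle = (pan + 1.0) * math.pi / 4.0   # maps -1..1 to 0..pi/2
    return math.cos(angle), math.sin(angle)

# Track 1 echo between center and left, track 2 between center and right.
echo1_l, echo1_r = pan_gains(-0.5)
echo2_l, echo2_r = pan_gains(0.5)
print(echo1_l > echo1_r)  # True: echo 1 leans left
print(echo2_r > echo2_l)  # True: echo 2 leans right
```

With equal-power panning the perceived loudness of each echo stays roughly constant as it moves, while the two echoes end up in clearly different places in the stereo image.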
Another cause can be two instruments occupying the same part of the sound spectrum, for example when the guitar is set to the same central frequency range as the singing voice. The sounds meld together and it becomes harder to tell the two apart. Of course this effect is amplified by reverb/echo.
Maybe it helps to be conscious of the fact that when you're mixing, you're acting as a producer, not as a musician or a composer.
It's a switch to another discipline, with different skills.