Can someone explain to me how to mix properly?

Started by SoundCrafter, January 26, 2007, 02:49:58


SoundCrafter

OK, so I'm sure most of you know by now that I've mostly moved on to Reason. Even so, one concept that consistently eludes my grasp is mixing. Here's my problem:

My song sounds bloody muddy. I've got a trance drum set, a ducking bassline, and an echoed lead, and they all fight for dominance. Am I right in thinking that I should take these three steps to fix the mix?

1: Set the channel levels properly (how I want the balance to sound).
2: Apply stereo imagers to all of the instruments to give each its own place in the stereo field.
3: And finally, apply some sort of compressor.

My problem is this. I've done steps 1 and 2, and the mix still sounds muddy. I also have no idea how to (properly) use a compressor. Can someone help me out here?
---Formerly known as ---DjBj---. changed names for lots of reasons.
BooT-SectoR-ViruZ is the new Skaven and the whole world'z goin' to Hell.
Lowpass filter! Perform a generic type of dodge!!! :lol:
Everyone should get on this forum's chatroom RIGHT NOW...still not sure why, though.

CrazyAznGamer

I don't use Reason, but I could take a stab at why it's still muddy.

Sure, you've got your panning and your compression. But to really make the sound crisp, it seems you should also separate the frequency ranges with an EQ.

Also, a trance drum set and an echoed lead would probably occupy large frequency ranges that intrude into each other.

I don't know how to do it in Reason, but when I mix my own songs, I usually render the parts into separate tracks, put them on different channels, and EQ each one into a different frequency range before doing the steps you mentioned.

Somebody check me on this; I'm not exactly a mixing guru.

LPChip

Indeed, make sure that each instrument has its own EQ range. Usually the kick sits on the lowest frequencies (30-100 Hz), the bass just above it, then the snare and other instruments, then the high end.

You can let the frequencies overlap, but make sure that each instrument covers its own range more strongly than the other instruments do.

Also, when setting levels, keep your overall volume low, or you'll get off-scale (clipped) results.

If you want a crisp result, apply small boosts on the high frequency bands. I usually do that at 2 kHz, 4 kHz, 6 kHz, 8 kHz and 10 kHz.

Play with the amount at each of those points to get the right crispness. You can easily give it too much high end this way, so be careful there :)
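If it helps to see what those small boosts look like in practice, here's a rough sketch outside Reason (plain Python using the standard RBJ peaking-EQ biquad formulas; the gain and Q values are just examples to play with, not anything from Reason):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Apply one RBJ peaking-EQ biquad (small boost or cut centred at f0) to signal x."""
    a_lin = 10 ** (gain_db / 40.0)                 # sqrt of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

fs = 44100
lead = np.random.randn(fs)                         # stand-in for a rendered lead track

# Small boosts at 2, 4, 6, 8 and 10 kHz -- a dB or two each is usually plenty.
for f0, gain_db in [(2000, 1.5), (4000, 1.5), (6000, 1.0), (8000, 1.0), (10000, 0.5)]:
    lead = peaking_eq(lead, fs, f0, gain_db)
```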
"Heh, maybe I should've joined the compo only because it would've meant I wouldn't have had to worry about a damn EQ or compressor for a change. " - Atlantis
"yes.. I think in this case it was wishful thinking: MPT is makng my life hard so it must be wrong" - Rewbs

SoundCrafter

OK, I forgot to mention that I DO put an individual EQ on each instrument... I don't particularly favor the idea of an 'umbrella' EQ, but if that's what works, I'll check it out. It seems, however, that the only EQs in Reason are parametric with two parameters, so LP's idea won't exactly work, but supposedly you can use Reason's vocoder as an EQ? (Still learning.)
Thanks for the tips.

seventhson

Make sure that all the elements that don't need bass actually don't have any bass.
Basically, highpass these elements (percussion, lead sound, pads, etc.) up to 100 Hz; the same goes for the mids (around 250 Hz and up) and the highs (everything above, say, 3 kHz). Too much percussion with lots of high frequencies can make the mix sound very harsh, so think about what your mix actually needs.
Also decide whether you want the kick or the bassline to carry most of the bass frequencies; if you have a kick with a lot of sub bass, try cutting some of the sub (around 50 Hz) from the bassline.
You should also be careful not to overdo effects such as delays and reverb; if you use them too much, or have too much of the wet signal, they easily muddy up the mix.
Volumes are also an important part of getting a good mix, but all of these things are something you learn in time. It takes a lot of time, practice and above all patience before you can get a decent mix.
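If it helps to see the highpass idea outside Reason, here's a minimal sketch in Python with SciPy; the cutoff frequencies are only example values to tune by ear:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(x, fs, cutoff_hz, order=4):
    """Butterworth highpass: removes energy below cutoff_hz from a track."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

fs = 44100
lead = np.random.randn(fs)   # stand-ins for rendered stems
pads = np.random.randn(fs)
hats = np.random.randn(fs)

lead = highpass(lead, fs, 100)    # lead doesn't need anything under ~100 Hz
pads = highpass(pads, fs, 250)    # pads can start even higher
hats = highpass(hats, fs, 3000)   # hi-hats only need the top end
```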

Hope this helps.

SoundCrafter

You say highpass, but unfortunately it's hard for me to run a signal through a filter in Reason. But isn't an EQ nearly the same thing? Couldn't I just 'roll off' certain frequencies and accomplish the same effect?

LPChip

I have the perfect solution for you!!!!!



Use MODPlug Tracker!!!!! :nuts:
"Heh, maybe I should've joined the compo only because it would've meant I wouldn't have had to worry about a damn EQ or compressor for a change. " - Atlantis
"yes.. I think in this case it was wishful thinking: MPT is makng my life hard so it must be wrong" - Rewbs

Snu

Wait, it's $400 and it doesn't have a decent EQ? Or support VSTs???
Well... I don't know much about how Reason works, but you might want to export rendered tracks of each instrument and use another program (a multitrack editor that supports realtime effects, especially VSTs) for mixing. I use ModPlug for writing and Tracktion for mixing; that works quite well for me.
I can't offer much advice on mixing, unfortunately, as I'm still not terribly good at it... The best advice I could give is to make the mix as good as you can, then have someone listen to it and give you some feedback (I need that too, but can't find any real mixing gurus about).

Speaking of which, what the heck happened to Atlantis?
He should be here posting one of his amazing tutorials!

Sam_Zen

Compressing things makes the dynamics muddier anyway.
0.618033988

SoundCrafter

@LP: Blimey, mate.

@Snu: Actually, for me it was only $200 (Academic version, son!), but yeah, there's no VST support, and most of the users are lobbying for it in the next version. However, there are two parametric EQs (a simple one and a more complex one), and you can also use the vocoder as a typical graphic EQ.

@Sam: Well, I have no idea how to properly use a compressor anyway, although I'd certainly like to know. I can't say I entirely agree with you there.

CrazyAznGamer

Quote from: "SoundCrafter"@LP: Blimey, mate.

@Snu: Actually, for me, it was only $200 (Academic version, son!), but yeah, there's no VST support, and most of the users are lobbying for it in the next version. However, there's 2 parametric EQs (a simple one and a more complex one). Also you can use the vocoder as a typical graphic EQ.

@Sam: Well, I have no idea how to properly use a compressor anyway, although I'd certainly like to know. I can't say I entirely agree with you there.

Wow, no VST support... wow...
Aaanyways... compressors should be used sparingly, if at all, until the very end. If you do use one before then, it should be to reduce gain. I have to agree with Sam: compressors generally make the sound muddy. Avoid them until the end, or use them just to bring levels down, and then finally compress the finished track sparingly.
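For what it's worth, the gain-reduction idea inside a compressor is roughly this (a bare-bones Python sketch, not how Reason's device works internally; the threshold, ratio and times are just example settings):

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    """Very simple feed-forward compressor: turn down anything above the threshold."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        coeff = att if level > env else rel      # react fast on the way up, slow on the way down
        env = coeff * env + (1 - coeff) * level  # smoothed level detector
        level_db = 20 * np.log10(env + 1e-12)
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1 - 1 / ratio)     # shrink the overshoot by the ratio
        out[i] = sample * 10 ** (gain_db / 20)
    return out

fs = 44100
lead = np.random.randn(fs) * 0.5                 # stand-in for a rendered track
lead_squashed = compress(lead, fs)
```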
Another thing that might be making the sound muddy is the reverb/echo. It's great for filling out tracks and giving them depth, but too much of it will screw up the sound.
Lastly, there's the actual composition itself. Hopefully you don't have the case where there's simply too much going on in the wrong places. If you do, it can be hard (yes, for all of us!) to cut out unnecessary junk, especially junk we worked hard on.

Sam_Zen

To SoundCrafter:
With most tools, compression means the loudest things get quieter and the quiet things get louder, so you can pump up the overall volume.
Every grass sprout on the lawn ends up the same length.

Quote from: "CrazyAznGamer"if at all, used until the very end
Yes, important mixing point you make there. Some things one should use only at the very end of the process.

This is typically true of the 'normalize' function, for example. If tracks are already normalized to 100% at an early stage and still have to be mixed together, distortion in the result is inevitable. The same goes when you want to boost some frequencies with a filter. Keep the tracks at a maximum volume of around 70 to 80 percent during the working process, so you have room on both sides.
If a track is truly digital, it doesn't matter that it sits at a lower volume during the process; amplifying it at the end will not add any background noise the way tape would. Ten times zero is still zero.
On the other hand, the volume range of a recording shouldn't be too small either, because then the full dynamic resolution isn't being used.
A basic piece of advice, just to build in some safety: almost never use 100% as a setting. If you want the max, use 96 to 99.
Mixing digital data basically means adding up sample values, so the risk of exceeding the maximum value is obvious.
In practice: suppose I have two stereo tracks, both normalized to 98%, in the multitrack editor, and I want to mix them down to a new stereo file. I would then preset the playback volume of each track in the multitrack editor to at least minus 5 dB, to make sure that the mix doesn't clip anywhere.
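The arithmetic behind that headroom rule is easy to check with a toy example (a Python sketch with two made-up sine "tracks"; the numbers are only illustrative):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs

# Two tracks, both "normalized" to about 98% of digital full scale.
track_a = 0.98 * np.sin(2 * np.pi * 110 * t)
track_b = 0.98 * np.sin(2 * np.pi * 220 * t)

mix = track_a + track_b
print(np.max(np.abs(mix)))        # ~1.7 -> past full scale (1.0), this mixdown would clip

gain = 10 ** (-5 / 20)            # -5 dB applied to each track before summing
mix_safe = gain * track_a + gain * track_b
print(np.max(np.abs(mix_safe)))   # ~0.97 -> stays just under full scale
```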

The use of reverb/echo is a good point too. The danger of things getting muddy is present here as well; maybe I should say 'soup' instead of muddy. Before you know it, all kinds of copies of the same sounds are floating around in a chaotic mess, especially if several instruments use the same reverb/echo in the mix.
To avoid this, the main thing is to create differences, even tiny ones.
So don't use an exact copy of the reverb/echo settings for the next track; at least change some insignificant variable from 72 to 74 percent.
Another soup-avoider is to assign the reverb/echo of each track to a different panning position, so track 1's echo sits between center and left and track 2's echo between center and right.
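Here is a small sketch of that panning trick (equal-power panning of two mono echo returns, in Python; the positions are arbitrary examples, not settings from any particular reverb):

```python
import numpy as np

def pan(mono, position):
    """Equal-power pan: position -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1) * np.pi / 4
    return np.column_stack([mono * np.cos(angle), mono * np.sin(angle)])

fs = 44100
echo_1 = np.random.randn(fs)   # stand-ins for the wet returns of two tracks
echo_2 = np.random.randn(fs)

# Track 1's echo between center and left, track 2's between center and right.
stereo_returns = pan(echo_1, -0.4) + pan(echo_2, +0.4)
```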

Another cause can be two instruments occupying the same part of the spectrum, for example a guitar sitting in the same central frequency range as the vocal. The sounds meld together and it becomes harder to tell the two apart. That effect is of course amplified if you add reverb/echo.

Maybe it helps to be conscious of the fact that when you are mixing, you are acting as a producer, not as a musician or a composer.
You're switching to another discipline, with different skills.

KrazyKatz

In my experience, apart from what's mentioned above, it could be because you're compressing the reverb! Put the reverb on a separate channel that isn't affected by the master channel compression.
Sonic Brilliance Studios
http://www.sonicbrilliance.com

SoundCrafter

Mods, mayhap move this to the Technical Docs forum?

LPChip

Quote from: "SoundCrafter"Mods, mayhap move this to the Technical Docs forum?

Aight. :)
"Heh, maybe I should've joined the compo only because it would've meant I wouldn't have had to worry about a damn EQ or compressor for a change. " - Atlantis
"yes.. I think in this case it was wishful thinking: MPT is makng my life hard so it must be wrong" - Rewbs