Cleaning up samples (pro help needed)

Started by LDAsh, July 03, 2016, 06:36:04


LDAsh

My brother once told me, in relation to audio engineering, "you can't polish a turd".
I know almost nothing about the deep technical methods and terminology.

What I basically (think I) need to do is clean up all of the "air" from my samples.  I often compose very heavy, thick music with many different instruments doubled up and many channels.  What I normally end up with is a very "muddy" sound.  It's hard to explain.  I'm also a bit tone-deaf and tend to really reduce the volume of the high-pitched treble and bass it up.  I assume this is due to all of my samples, despite being high bit depth and sample rate, not being clean from the beginning.  Many of them were recorded from real instruments.  What I'm getting is an accumulation of all of this useless air from my samples, giving the final mix a grubby, muddled sound that just can't be fixed.  The air is not noticeable when listening to each sample individually, but it all stacks up when mixed.  That's my amateur assumption.

Like I said, I have no idea how to properly refer to anything, but I am very willing to learn.  The approach I'm trying is to play with equalisation, testing the samples while isolating different frequency ranges, and finally lowering (zeroing) anything (low bass or high treble) that seems to be nothing but noise.  This helps, but not enough, I feel.  I've tried some filters called "hiss reduction", "reduce hum" and "compressor", but these can sometimes do more harm than good under my amateur fingers.  I've heard a lot about using a compressor, but I normally end up losing the subtle attacks and decays, which I don't want to lose, so I'm obviously not doing something correctly.

So I'd really appreciate if anyone can give me some pro advice, with some audio engineering knowledge, on how I can go about trying to "polish a turd". :)

While I'm at it - I'm also especially interested to learn the best ways (pro techniques) concerning what I should do before and after trying to convert say 24bit@48KHz samples down to say 8bit@11KHz samples, for use on legacy hardware to play in a real-time game engine.  I think this subject is related because I get very muddy, noisy results, teetering on the unacceptable.

Thanks for your time! :)




LPChip

It is possible that your samples are not as high quality as they could be and thus have some noise in them which creates muddiness. Then again, if you replaced the samples with high-quality VSTis, chances are you would still get this muddy feeling.

The reason is something you basically already mentioned yourself: you remove the high end and add bass. Depending on where you add that bass, this is going to give you a muddy sound for sure.

What you need to do is look into mixing and mastering a track. That will show you where you want to apply EQ and what happens if you do it in the wrong places. For example, boosting EQ in the mid-section of the spectrum will produce muddiness. You will want to reserve this space for whatever you really want to stand out, such as the lead vocal or another lead, and possibly the snare drum. The more you put in here, the worse it can get. Panning can help you a bit, but only ever so slightly. The range here is between 200 Hz and 800 Hz. Try to put as little in this range as possible, and if you have to have a lot in there, use a different EQ on each instrument to divide them up with as little overlap as possible.
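
To make that a bit more concrete, here is a rough Python sketch (numpy/scipy) of taking a few dB out of that 200-800 Hz region on a single sample before mixing. The file name, centre frequency, gain and Q are just placeholders to show the idea, not a recipe:

import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    # Peaking-EQ biquad coefficients from the RBJ Audio EQ Cookbook.
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs, x = wavfile.read("pad_sample.wav")                # hypothetical 16-bit sample
x = x.astype(np.float64) / 32768.0
# Broad -4 dB dip centred at 450 Hz, roughly covering the 200-800 Hz mud zone.
b, a = peaking_eq(fs, f0=450.0, gain_db=-4.0, q=0.7)
y = lfilter(b, a, x, axis=0)
wavfile.write("pad_sample_eq.wav", fs, (np.clip(y, -1.0, 1.0) * 32767).astype(np.int16))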

The same kind of applies to the 800 Hz to 6 kHz range, where human hearing is most sensitive. It won't produce muddiness, but you will want to be careful here as well. Use only subtle changes, such as narrow peaks, rather than whole ranges.

The biggest place where muddiness occurs, though, is between 600 and 800 Hz.
"Heh, maybe I should've joined the compo only because it would've meant I wouldn't have had to worry about a damn EQ or compressor for a change. " - Atlantis
"yes.. I think in this case it was wishful thinking: MPT is makng my life hard so it must be wrong" - Rewbs

Saga Musix

Quote from: LPChip
It is possible that your samples are not as high quality as they could be and thus have some noise in them which creates muddiness. Then again, if you replaced the samples with high-quality VSTis, chances are you would still get this muddy feeling.
I simply cannot let this stay here as it is.

First: Noise does not make your mix muddy (in fact, special noise like dithering will make the end result sound better). Instruments fighting for the same frequency ranges do that.
Second: VSTis are not a magic solution. High-quality samples could be just as good or even better.

If your track sounds muddy, try identifying the individual frequency ranges where each instrument is most dominant using a spectrum analyzer, and check whether they clash with any other instrument. You can try fixing such a conflict either by EQing the sample or, if it's possible in your track, by putting an EQ on the track/instrument itself. When doing so, remember that you should always take frequencies away rather than adding them, because otherwise you will just amplify the noise in your sample. Compressing samples can also help you avoid having to EQ them heavily.
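
If you want a quick-and-dirty way to do that outside of a spectrum analyzer plugin, here is a small Python sketch (numpy/scipy) that just prints the frequency each sample is loudest at, so you can spot two samples piling up in the same region. The file names are placeholders:

import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

for name in ["bass.wav", "pad.wav", "lead.wav"]:    # hypothetical samples
    fs, x = wavfile.read(name)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)                          # fold stereo down to mono
    f, pxx = welch(x, fs=fs, nperseg=4096)          # averaged power spectrum
    print(f"{name}: most energy around {f[np.argmax(pxx)]:.0f} Hz")
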
Sample sources can definitely be a big problem; if you have bad samples you are not lost, but you will have to put a lot of effort into making them shine (your brother is wrong here).

A lot of this comes from experience and experimenting, and seeing how other people do it, so I'm afraid there is no simple solution. But in the end, it's always about stopping instruments from fighting for the same frequency ranges - even when using instrument plugins.
» No support, bug reports, feature requests via private messages - they will not be answered. Use the forums and the issue tracker so that everyone can benefit from your post.

LPChip

My bad on wording it improperly. What I meant by bad samples with noise is that a sample can have noise concentrated in specific frequencies, and when those line up across samples, e.g. in the 600-800 Hz range, they effectively cause someone to boost those frequencies. But I agree that noise can also act like dither and thus work the other way around.

So thanks for clearing that up.
"Heh, maybe I should've joined the compo only because it would've meant I wouldn't have had to worry about a damn EQ or compressor for a change. " - Atlantis
"yes.. I think in this case it was wishful thinking: MPT is makng my life hard so it must be wrong" - Rewbs

Brozilla

Too much ambience? In all seriousness, I had a similar problem with FL Studio. Aside from better samples, I also increased the use of effects, particularly "Soundgoodizer", which appears to be some form of preset EQ that quite frankly makes stuff sound good. Rather than directly targeting the problem, there are ways to increase clarity. The 4-16 kHz region generally contains overtones essential for a "bright" sound. Recording a viola and messing around with the EQ may let it sound much like a violin in timbre; the vibratory patterns (which themselves vibrate at certain frequencies) affect the instrument's voice. As Chip stated previously, muddiness generally comes from 200-800 Hz, as the fundamental frequencies of many instruments (that is, non-bass notes) fall within that range; higher up you get into "high" notes and overtones/harmonics.

Without a little excerpt it's not entirely trivial to know whether it's "true" muddiness or a lack of brightness. I haven't used an EQ in a while, but mine generally ended up shaped a bit like a 'u'. The shape is an exaggeration, but it peaked at 30 Hz/16 kHz and had its trough at around 250-500 Hz. Certain drums tend to lie around 500 Hz and 1 kHz; I don't recall them sounding weaker, but they did sound generally less realistic.
I recall you also use multi-sampled instruments; you can improve the brightness by offsetting the sample ranges. For example, if you've got a sample at A4 (~440 Hz), letting that sample play up into higher frequency ranges is far less likely to sound muddy than the other way around, though you risk it sounding more quantized in doing so. This is especially true if the sample contains vibrato: string players (at least I do) generally play vibrato whenever they want (usually most of the time), and it often oscillates at fixed frequencies but can be dynamically applied as well, though that is another topic on its own.

Quote from: LDAsh on July 03, 2016, 06:36:04
While I'm at it - I'm also especially interested to learn the best ways (pro techniques) concerning what I should do before and after trying to convert say 24bit@48KHz samples down to say 8bit@11KHz samples, for use on legacy hardware to play in a real-time game engine.  I think this subject is related because I get very muddy, noisy results, teetering on the unacceptable.
Thanks for your time! :)
IMO you can downsample to more or less any frequency. Most audio engines will have a resampling algorithm (linear at a minimum), so the specific target rate shouldn't matter all too much. Bass and bass-like instruments can get away with lower sample rates, as their fundamentals are likewise lower. You will lose some color that comes from the overtones, but that matters far less than for treble instruments and especially for percussion like the glockenspiel and vibraphone. DC offset removal is often a good friend; do it both before and after downsampling.
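
For what it's worth, here is a rough Python sketch (numpy/scipy) of that whole path for one sample: remove DC, resample with proper filtering, remove DC again, then quantize to 8 bits with a little triangular dither (the useful kind of noise Saga Musix mentioned). The file names and the target rate are placeholders, and I'm assuming a 16-bit source for simplicity:

import numpy as np
from math import gcd
from scipy.io import wavfile
from scipy.signal import resample_poly

fs, x = wavfile.read("hit_48k.wav")                 # hypothetical 16-bit source
x = x.astype(np.float64) / 32768.0
x = x - x.mean(axis=0)                              # DC offset removal (before)

target_fs = 11025
g = gcd(target_fs, fs)
y = resample_poly(x, target_fs // g, fs // g)       # low-passes while resampling
y = y - y.mean(axis=0)                              # DC offset removal (after)

# Quantize to 8-bit unsigned WAV with roughly 1 LSB of triangular (TPDF) dither.
dither = (np.random.uniform(-0.5, 0.5, y.shape) +
          np.random.uniform(-0.5, 0.5, y.shape)) / 127.0
y8 = np.clip(np.round((y + dither) * 127.0) + 128.0, 0, 255).astype(np.uint8)
wavfile.write("hit_11k_8bit.wav", target_fs, y8)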

Most romplers/sample-based systems are limited more by their RAM and storage space than by the actual sample rate. Both the NDS and the SNES output audio at 32 kHz, but the NDS has much more RAM (AFAIK samples are stored in a pooled, user-defined region out of 4 MB), and as a result you've got the choice of using more samples/instruments. The SC-88 synthesizer is also a 32 kHz device; listening to any of these devices you'll notice that the samples themselves are of great importance and that your RAM allocation matters a lot. The NDS uses linear resampling, so it's better to either use higher sample rates and/or more samples, that is, multisampled instruments. The SNES uses a 4-point Gaussian filter and is FAR less likely to suffer from aliasing. The PSX sound chip shares many similarities with the SNES chip; the NDS generally outputs a cleaner sound as a result despite a lower sample rate (we're assuming sequenced music on the PSX, which is 44.1 kHz, not Red Book CD audio).
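
If you want to see why the interpolation method matters, here is a toy Python comparison (numpy/scipy, all numbers made up): a 7 kHz tone at 32 kHz lies above the Nyquist limit of an 11025 Hz target, so a properly filtered resampler mostly removes it, while plain linear interpolation onto the new grid (a rough stand-in for unfiltered linear resampling) folds it back down as a spurious tone around 4 kHz:

import numpy as np
from math import gcd
from scipy.signal import resample_poly

fs_in, fs_out, f_tone = 32000, 11025, 7000.0
t_in = np.arange(fs_in) / fs_in                     # one second of signal
x = np.sin(2 * np.pi * f_tone * t_in)

t_out = np.arange(fs_out) / fs_out
naive = np.interp(t_out, t_in, x)                   # linear interpolation, no filtering
g = gcd(fs_out, fs_in)
filtered = resample_poly(x, fs_out // g, fs_in // g)  # band-limited resample

for name, y in [("linear", naive), ("filtered", filtered)]:
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1.0 / fs_out)
    print(f"{name:8s} rms={np.sqrt(np.mean(y ** 2)):.3f}  "
          f"loudest bin at {freqs[np.argmax(spectrum)]:.0f} Hz")
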
44.1 vs. 48khz sampling rate

Exhale

Hey, maybe this will help,

http://www.vst4free.com/free_vst.php?plugin=DeHarsh&id=2209
It has helped me out a few times.

...but if things are getting muddy, that is a pure levels job (I've been learning a little bit more about mastering and mixing, sort of re-learning some of it too).
___________________
The turtle moves!