Started April 3rd, 2008 · 24 replies · Latest reply by hesed23 16 years, 7 months ago
I was wondering whether maximizing the volume of the files I post is a good idea, or whether I should just leave them alone and let the user decide. (By maximize I mean raising the volume, but not to the point where it distorts.)
My thought is to give recordings a little volume boost so the sound is actually heard by a listener, where it might otherwise seem underwhelming unless they take the time to boost the volume themselves.
I guess I'm just wondering whether I'd be doing something undesirable to the sound by raising the volume, versus leaving it unprocessed and letting the user decide what to do.
Please don't do this for any sound: anybody using your sample for anything remotely professional will need the headroom of a well-recorded sound. Raw is always the way to go. Even if you think it needs a little processing, leave that up to the person who uses it, so they can tweak it to suit their needs.
I agree that processing or using any type of compression to boost the overall level of a sound is undesirable here. The end user cannot undo the compression/limiting. I do, on the other hand, believe normalizing is just fine here. All that does is raise the level of the sound according to the highest peak in the file. It should be done in a good stereo editor that uses 32-bit floating point to do the math. The end user can simply lower the volume later if they need to when they use the file.
The good thing about normalizing the file before it is sent to Freesound is that the level will be decent for the preview. Some sounds on here are very, very quiet. It also gives a slightly better picture of the sound in the spectrogram for anyone doing a quick eye scan over sounds that look like the shape of what they need. Of course, listening to all of the sounds is the best solution, but I sometimes find myself looking at the spectrogram to get a quick idea of what a sample will be like. The 32-bit float is important. Some programs (namely Pro Tools) do bad things to the audio when any math is applied to the signal. Most listeners will not notice, but the signal seems to get a little more edgy or harsh the more math you introduce.
To sum it up (no pun intended): normalizing is OK; compression/limiting/processing = bad!!!
I was just about to make those exact points about why I like normalization!
Except that I sometimes do limit a bit before normalizing, when the peaks are really too high. Not ideal...
I applied a little gain, from +3 to +9 dB, to some of my field-recorded samples that had very low volume, to offer a better preview on Freesound - somewhere I found advice to do so. On the other hand, when I tried to normalize a sample I perceived something wrong, like less difference between low and high volume peaks - to my ear adding gain sounds better.
Am I right, technically speaking? Is normalizing better than adding gain? And why?
Maybe I'm making wrong use of the tech terms; I'm still learning about audio tech, almost a newbie :wink:
Normalizing is the same as adding gain. What normalizing does is find the highest peak in your selection and raise the level of the entire sample so that the highest peak sits at 0.0 dB, or whatever you set the normalizing target to. So what is actually happening is that you are applying gain to the signal to bring the highest peak to 0.0. In the end, normalizing = gain, but more like a smart gain that doesn't let you clip the signal if you add too much. I usually normalize to -3 dB in most cases. For some things I normalize all the way to -0.1 dB, but never to 0.0 dB. I don't trust the computer to make the correct calculations 100% of the time. LOL
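For anyone who wants to see this "smart gain" idea concretely, here is a minimal sketch in Python with numpy (my own illustration, not anything from the thread; the function name and the -3 dB default are arbitrary):

```python
import numpy as np

def peak_normalize(samples, target_db=-3.0):
    """Scale a signal so its highest absolute peak lands at target_db dBFS.

    Expects floating point samples in the range [-1.0, 1.0].
    """
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                              # pure silence: nothing to scale
    target_linear = 10.0 ** (target_db / 20.0)      # -3 dB -> ~0.708 of full scale
    return samples.astype(np.float32) * np.float32(target_linear / peak)

# A quiet 440 Hz tone peaking at 0.1 of full scale (-20 dBFS)
quiet = 0.1 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
loud = peak_normalize(quiet, target_db=-3.0)
print(np.max(np.abs(loud)))                         # ~0.708, i.e. about -3 dBFS
```

The whole file is multiplied by one constant, so the relative dynamics are untouched; only the overall level changes.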
That's all well and good, but I would never download and use, in a professional environment, samples that have been normalized. When you record anything, unless it's analog, you should stay WELL away from peaking, usually peaking between -12 and -18 dB. So when I download a new sample to use, I'd like it to be in that range, as though I had just recorded it. That way, in terms of gain structure, putting many samples and recordings together will sound better than mixing the same samples after they've been normalized.
I realize that the previews sound great on Freesound when you normalize, but honestly I'd rather have to turn up the volume on my computer than download a normalized file. Perhaps the previews on Freesound could be automatically normalized, so you guys wouldn't have to worry about this?
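If you want to check whether a file already sits in that -12 to -18 dB range before uploading, something like the following rough sketch works; it assumes a 16-bit PCM WAV and uses only numpy and the standard-library wave module:

```python
import wave
import numpy as np

def peak_dbfs(path):
    """Return the highest sample peak of a 16-bit PCM WAV file, in dBFS."""
    with wave.open(path, "rb") as wf:
        if wf.getsampwidth() != 2:
            raise ValueError("this sketch only handles 16-bit PCM files")
        raw = wf.readframes(wf.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    peak = float(np.max(np.abs(samples)))
    return 20.0 * np.log10(peak) if peak > 0 else float("-inf")

# print(peak_dbfs("my_field_recording.wav"))   # aim for roughly -18 to -12 dBFS
```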
Normalizing is nothing more than multiplication. And multiplication can be undone -- perfectly -- by division. So long as you don't clip, multiplying a number by another number doesn't hurt the first number, and the first number can be recovered perfectly by a division.
I say "perfectly", but of course we are dealing with digital numbers, which are not at all like real numbers.
See, for example this:
http://cowboyprogramming.com/2007/01/05/visualizing-floats/
In any case, balking at a normalized file with no detectable clipping is nonsense.
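To see this point in practice, here is a quick numpy check (my own, with arbitrary numbers, not from the thread): multiply a signal by a gain that doesn't clip, divide it back, and look at the worst-case difference.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(-0.5, 0.5, size=48000).astype(np.float32)

gain = np.float32(1.9)            # any gain that keeps the peaks below +/-1.0
boosted = original * gain         # "normalize"
restored = boosted / gain         # undo it

# Worst-case error is on the order of 1e-8 in 32-bit float -- far below audibility.
print(np.max(np.abs(restored - original)))
```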
Agreed. This is why I reiterate the importance of 32-bit floating point for doing any type of gain (multiplication). It will give you the most accurate representation, with very few rounding errors.
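As a rough illustration of why the math should happen in floating point (my own toy example, not from the thread): apply a gain and undo it once with rounding to 16-bit integers at every step, and once in 32-bit float with a single rounding at the end.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = (rng.uniform(-0.25, 0.25, size=48000) * 32767).astype(np.int16)

# Rounding back to 16-bit after every operation
a = np.round(signal * 1.3).astype(np.int16)
a = np.round(a / 1.3).astype(np.int16)

# Doing the math in 32-bit float and rounding back to 16-bit only once at the end
b = signal.astype(np.float32)
b = np.round((b * 1.3) / 1.3).astype(np.int16)

print(np.max(np.abs(a - signal)))   # up to about one LSB of error
print(np.max(np.abs(b - signal)))   # essentially zero
```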
Thanks ejfortin and everybody here! Precious advice, worth more than a dozen manuals. I'll try to summarize, thinking about non-pro sound management like mine:
- unprocessed samples are always the best choice, even with low volume
- normalizing never clips the signal / excessive gain could clip
- a range between -12 and -18 dB is OK, especially for further mixing
- never push normalization to 0.0 dB; go to -3 / -1 dB at most
Some doubts still remain:
tweeterdj wrote "When you record anything, unless it's analog...":
do you mean any source not coming from a digital device? Like field recording?
ejfortin wrote "the importance of the 32 bit floating point":
do you mean the operating system or the software? Is Audacity 32-bit float? And Wavelab?
By the way, the range between -12 and -18 dB is the same I've found in many CD tracks. In another FS forum I've read about a new style of mastering, where the preference goes to a narrower peak range - but maybe that is about compression and not normalizing...
I hope I'm not bothering you all - the topic is so important and I'm so ignorant
Yes, both Wavelab and Audacity can perform edits with 32-bit floating point accuracy. 32-bit floating point has nothing to do with the operating system; it is the software that makes the 32-bit floating point edits possible.
When tweeterdj was talking about recording to an analog device, he was referring to the tape saturation and compression that is in some cases desirable. Pushing an analog recording device into "the red" can cause harmonic distortion that can sound quite nice on certain recordings. For the most part, though, most people nowadays are recording to a digital device, not a tape-based machine. And remember, a DAT recorder, even though it uses tape, records digitally. Clipping digitally does not create the same warm harmonic distortion. Pushing a digital device into the red causes harsh, unsatisfying distortion that must be avoided.
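To hear the difference for yourself in a quick test, you can generate it; a very rough sketch (tanh is just a common stand-in for soft analog-style saturation, not a model of any real tape machine):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
hot = 2.0 * np.sin(2 * np.pi * 220 * t)      # a tone driven 6 dB "into the red"

digital = np.clip(hot, -1.0, 1.0)            # hard clipping: flat tops, harsh odd harmonics
tape_ish = np.tanh(hot)                      # soft saturation: rounded tops, gentler sound

# Write both out (e.g. with scipy.io.wavfile) and compare by ear.
```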
Glad to help; if you have further questions, just post.
Actually, if you care about retaining the highest audio quality, normalization is bad. Normalizing is not a perfect process, especially for files recorded at less than 24 bits. You'll get more noise from rounding errors.
To get your files peaking at a reasonable level, it is better to use a good level maximizer that operates at a high internal bit depth, and then just don't push the gain reduction far enough to touch the dynamics of your source. That will bring the sound up without the problems of normalizing.
I disagree. The internal bit depth of a plugin or level maximizer is never going to exceed 32-bit float. Using a level maximizer, you risk unwanted compression if the threshold is set too low. A level maximizer uses multiplication to boost the signal, just like normalizing, but it also compresses if the signal hits the threshold. If we are looking for a good general rule of thumb, I would suggest that people stay away from level maximizers. Yes, they make the signal loud, but this is accomplished at a cost. Some maximizers have a hidden expander or noise gate that makes it seem like the signal is getting louder without introducing extra audible noise; this is what stomachache might be thinking of. Again, once this is done it is not possible for the end user to undo it.
Anyone else want to put their two cents in on this? I welcome any other opinions to help create guidelines for people with questions about processed or unprocessed sounds for uploading.
I've had a quick chat with Ross Bencina ( http://www.audiomulch.com ) and the conclusion is: this is a pretty hard question! We THINK that there MIGHT be some loss when normalizing, but we need someone who knows even more about audio DSP to help. This loss (if any) does not depend on whether you are using 16 or 24 bit; only the relative error will be smaller at 24 bit (losing one bit at 16 bit is worse than losing one bit at 24 bit).
ejfortin: many audio effects these days use 64-bit "double" precision internally (and FPUs mostly use even more bits internally for floating point operations), while most hosts use 32-bit floats for all audio. There is a BIG difference between maximizers/limiters/... and normalization, though. In general I think it's a REALLY bad idea to use any type of post-recording digital dynamics processing on "purist" field recordings. And, in general, it's a really, really bad idea to make any kind of assumption about the internal signal-path bit depth of signal processing applications.
ejfortin wrote: "32 bit floating point has nothing to do with the operating system, it is the software that makes the 32 bit floating point edits possible."
... it actually has very little to do with software and a lot to do with hardware.
It's been a few years, but I'll subscribe to the music-dsp mailinglist and ask the normalization question there.
- bram
AndyH-ha - "for a single normalization the error is more theoretical than actual in that it will never approach audibility"
I agree with this statement. Yes, there will be rounding errors when normalizing, but the errors are so minute that they will never be audible, even to the most trained ears. Normalizing or adding/subtracting gain ten times in a row will cause fewer audible flaws in the signal than a single pass of other types of level maximizing or processing. Compression, limiting, and maximizing all do more than a simple multiplication of every sample: they squash the signal by reducing the values of samples over the threshold, and then add gain to boost the overall level. Yes, this makes the signal "louder", but it also introduces other AUDIBLE artifacts.
In my experience the simplest signal path is always the safest, and a simple normalization will, in my mind, always be the best route. Yes, to the purist leaving the signal untouched is the best route, but it is not always the best for previewing on this site. I think the minute and INAUDIBLE rounding errors that may be present after a normalization are well worth being able to hear the sample here on the site. Heck, if I want to listen to some of the samples here at a reasonable level, I have to download them and then normalize them myself just to take a good listen.
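A toy test along these lines (my own, not from the thread) puts a rough number on that claim: apply ten gain changes and their inverses in 32-bit float, then measure the worst-case deviation from the original.

```python
import numpy as np

rng = np.random.default_rng(2)
original = rng.uniform(-0.5, 0.5, size=48000).astype(np.float32)

signal = original.copy()
for _ in range(10):                      # ten gain passes, each later undone
    signal = signal * np.float32(0.7)
    signal = signal / np.float32(0.7)

# Typically on the order of 1e-7, i.e. well over 100 dB below full scale.
print(np.max(np.abs(signal - original)))
```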
Sounds like a great idea!!! I hope it can be implemented.
The only problem is that I'm not able to listen to the previews on the site most of the time. I hit the play arrow and usually nothing happens, or nothing happens for quite some time. I tend to give up on the preview rather quickly and go straight to listening to the full .wav file. Anyone have this same problem? It's not every time, but often enough that I usually go straight to the wav file. Any ideas, Bram?