Started November 11th, 2016 · 10 replies · Latest reply by stomachache 8 years ago
Hello!
I am a second-year multimedia student at Confederation College in Thunder Bay, Ontario. I am currently studying sound FX and Foley in my Sound & Motion class. My professor advised me that the Freesound community would be a great place to post my work and have it critiqued by users like you. I would highly appreciate anybody who gives me critique and/or criticism on my sounds. They can be listened to here.
Thank You,
Hubbard
Comment:
General, for every sound, always:
Add a fade-in and fade-out to every sound (40 ms or less, about two video frames). This prevents clicks at the start and end of the audio (see the sketch after these tips).
Normalise every sound to -3 dB; normalise background sounds to -20 dB. Very useful if you're editing your movie.
Keep sounds clean, so you don't have to remove noise with Audacity afterwards. A quiet environment and a good mic are the most important things; too much cleaning makes a sound unnatural.
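Here is a rough sketch of those first two tips in code (a fade at each end plus peak normalisation), assuming Python with numpy and soundfile; the file name and levels are only examples:

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("razor.wav")      # hypothetical file, assumed mono

fade_len = int(0.040 * rate)           # 40 ms, about two video frames
ramp = np.linspace(0.0, 1.0, fade_len)
data[:fade_len] *= ramp                # fade-in removes the start click
data[-fade_len:] *= ramp[::-1]         # fade-out removes the end click

target_db = -3.0                       # use -20.0 for background sounds
peak = np.max(np.abs(data))
data *= 10 ** (target_db / 20) / peak  # peak-normalise to the target

sf.write("razor_edited.wav", data, rate)
```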
https://freesound.org/people/ahubbar1/sounds/367999/
If you add a picture to your upload, make it no wider than 450 pixels on Freesound. ;-)
The razor is on the left channel only. Delete the right channel and make the left channel mono.
Place the microphone as close as possible to the razor; that gives a better signal-to-noise ratio. If you publish a picture it should be public domain. Do not just copy it from the internet.
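A minimal sketch of that mono fix, assuming soundfile and placeholder file names:

```python
# Keep only the left channel where the razor actually is,
# instead of publishing a half-silent stereo file.
import soundfile as sf

data, rate = sf.read("razor_stereo.wav")   # shape: (samples, 2)
left = data[:, 0]                          # drop the silent right channel
sf.write("razor_mono.wav", left, rate)     # a true mono file
```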
https://freesound.org/people/ahubbar1/sounds/362382/
When you upload an up-and-down zipper recording at different zipping speeds, create still moments (with fade-in/out) between the different parts. That makes it easy for others to cut out the zip they want.
Good luck with your studies.
Another small note to add to klankbeeld's excellent advice: many, if not most, natural sounds have a natural "tail" where they fade out. Be careful not to clip this off; make sure the level has dropped to the noise floor (at least -60 dB or -70 dB) before the final de-click fade starts. I've heard many otherwise fine recordings spoiled by an unnatural decay that isn't always masked in a mix.
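If you want to check this by numbers rather than by ear, here is a small sketch (the file name, window length, and threshold are just reasonable guesses) that measures the level of the last stretch before the fade:

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("bell.wav")
tail = data[-int(0.100 * rate):]        # last 100 ms before the fade
rms = np.sqrt(np.mean(tail ** 2))
db = 20 * np.log10(max(rms, 1e-10))     # dB relative to full scale
print(f"tail level: {db:.1f} dBFS")     # aim for -60 dBFS or lower
```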
Good luck with your career in sound.
Wibby
Hi toiletrolltube,
I used to be an analogue audio electronics designer, first in the 1980s with Midas mixing consoles and later with a company I co-founded called BSS Audio, so my experience is purely technical. I'm not a real "sound man" at all, but I did learn to relate "what-I-hear" to the theory and measurements on my test equipment. At the end of the day it's what your ears tell you that's important, and the "measurements" simply give clues as to what might make things (even) better. I've listened to great recordings on Freesound made with competent but middle-of-the-road gear that knocked spots off stuff made on esoteric high-end equipment. That's because one guy (on a budget) knows how to use his ears and the other guy (with the money) just trusts the "specifications".
I figure you're an ear man, and decibels aren't that important.
Regards, Wibby.
p.s. I'm a naughty boy!
It was a bit lazy of me to use decibels, which are a ratio without a reference, to describe signal levels, so before I get told off by the Freesounders who understand this stuff, let me add the reference usually used in digital audio, where 0 dB is the maximum level before the signal clips. So -60 dB (full scale, or clip) would be 1 million times less power than full headroom, and -70 dB (clip) would be 10 million times less power.
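For anyone who wants the arithmetic spelled out, a tiny sketch of the dB-to-ratio conversion (power ratio is 10^(dB/10), amplitude ratio is 10^(dB/20)):

```python
for db in (-60, -70):
    power = 10 ** (db / 10)        # power ratio relative to full scale
    amplitude = 10 ** (db / 20)    # amplitude ratio relative to full scale
    print(f"{db} dBFS = 1/{1 / power:,.0f} of full-scale power "
          f"({amplitude:.5f} of full-scale amplitude)")
# -60 dBFS -> one millionth of the power; -70 dBFS -> one ten-millionth.
```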
Actually, worrying about this numbers stuff is a complete waste of time: just turn the monitors up and decide where the signal is getting lost in the noise.
I won't take offense if anybody corrects my assertions. I went to school a long, long time ago, in the days of thermionic valves and 600 ohm transmission lines!
Wibby.
toiletrolltube wrote:
You're right, I am ears, albeit old and tired, but it can be hard when speakers and headphones tell you different things. How does this sound to someone else? Questions etc.
Hahaha... I sure get that! In fact, my left ear and right ear tell me different things with the same speaker, and the last job I did before I retired was designing the electronics for Quested Studio Monitors! But you're right: when you listen to speakers you're really listening to the room, so everyone's experience is different. (And I only sing in the bathroom when no one is listening.)
Wib.
toiletrolltube wrote:
Wibby, that's one that gets me every time: how long or short to make the fade (even, or mostly, with unnatural sounds): 1 second, 2, 4, etc. The fade is very important in a Freesound sample, though, and often overlooked. Thanks for your advice; I'll try to figure out the dB. Maybe you're coming from a different audio perspective, I don't know.
I think it is not just a question of how long or short the fade should be. Natural sounds (and many other things) decay exponentially, but many fades are linear, so they just sound wrong even if they are otherwise long enough...
Absolutely right AlienXXX,
That's one of the failings of basic algorithms in digital processing. When I use the simple fade function in Audacity I select the fade region and apply the fade 2 or 3 times to get the last bit of the tail to persist enough. Digital processing also has the same potential weakness in the frequency domain, where the sample "buckets" don't resolve logarithmically up the spectrum; to simulate a natural EQ you have to "guess" many in-between points on the curve. I've only ever used Audacity, so I assume that more sophisticated editors tackle this better.
I'm from the analogue generation, where I put my trust in resistors, capacitors and inductors. They obey the natural rules straight out of the box! Having said that, I'm not knocking digital processing; it does stuff that is totally impractical with (affordable) analogue.
Wibby
Hi strangely_gnarled,
If I understood what you mean correctly, that gives you a power-law decay (x^2, x^3, etc.) instead of an exponential one. Not perfect, but much better than linear...
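A small sketch comparing the three fade shapes discussed here: a single linear ramp, the same ramp applied three times (the x^3 power law from the Audacity trick above), and a true exponential decay. The 60 dB decay range and sample count are arbitrary choices:

```python
import numpy as np

n = 1000
x = np.linspace(1.0, 0.0, n)        # fade-out position, 1 -> 0

linear = x                          # plain linear fade
power = x ** 3                      # linear fade applied 3 times
floor = 10 ** (-60 / 20)            # stop the exponential at -60 dB
expo = floor ** (1 - x)             # exponential decay over the fade

# Halfway through the fade the linear ramp is still at about -6 dB,
# while the exponential is already at -30 dB: that is the audible gap.
for name, curve in (("linear", linear), ("x^3", power), ("exp", expo)):
    print(f"{name:>6} at midpoint: {20 * np.log10(curve[n // 2]):6.1f} dB")
```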
Spectral processing has a couple of downsides. Typically the frequency split is logarithmic (I think even in Audacity it is), with each octave split into a fixed number of buckets.
This doesn't work too badly if you are processing sounds offline: in theory, you could have as many frequency buckets as you like...
If you try to do it in real time, the biggest problem is that the equations need to 'see' a considerable portion of a full cycle of a wave in order to capture that frequency component. For really low frequencies, down to 20 Hz, this gives a delay (or latency) of ~50 ms, which is a big problem for real-time sound processing.
Because the equations need to 'see' enough of the sound to determine whether a frequency of 20 Hz is present or not, and the sound is being fed to the EQ in real time (imagine you are EQing a vocal or an instrument), you have to wait those 50 ms regardless of the speed of your CPU. Then you add, on top, the additional delay required for the CPU to do the maths.
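The ~50 ms figure is simply the period of the lowest audible frequency; a one-line check:

```python
freq = 20.0                  # Hz, lowest audible frequency
window = 1.0 / freq          # one full cycle, in seconds
print(f"{window * 1000:.0f} ms of signal needed")   # -> 50 ms
```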
That is why many filters/EQs model the equations of the actual electronic circuits rather than using FFT maths: with those you can achieve much lower delays.
The buckets introduce another problem: the blurring of transients.
But I have already gone way off-topic here, so I will shut up...
Your keyboard typing and door sound have a lot of bass and rumble. Filtering out the low end would help, and perhaps you had the microphone too close. Your razor sound is panned to one side; it seems like you unnecessarily recorded to a stereo track.
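For the low-end rumble, a hedged sketch of one possible fix: a gentle high-pass filter using scipy (the 80 Hz cutoff, filter order, and file names are illustrative choices, not a prescription):

```python
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

data, rate = sf.read("keyboard.wav")
sos = butter(4, 80, btype="highpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, data, axis=0)    # zero-phase filtering
sf.write("keyboard_hp.wav", filtered, rate)
```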