Started December 18th, 2009 · 2 replies · Latest reply by strangely_gnarled 14 years, 11 months ago
Is there a way to clip a range of frequencies out of a file and into another?
I.e. separating the high notes in the background from a low narration voice.
I have Audacity and Soundbooth CS4. If you need any more info just let me know.
EDIT > I figured out how to clip it in the frequency display (the yellow-purple spectral view) but I can't do it from within the waveform. You use the rectangular marquee tool on the frequencies, but I want to figure out how to do the same thing in the waveform.
(Soundbooth CS4) I can select the whole waveform, but I just want the dead center of the waveform. Is this possible?
My first suggestion with Audacity (I don't know Soundbooth) would be to try as follows.
1. Duplicate your track twice, renaming one copy, say, "High" and the other "Low". Keep the original track unedited so you have a reference for comparison monitoring.
2. Select track "High" and apply the "High Pass Filter" effect. For maximum separation I would suggest starting with slope = 48 dB/octave and Q = 0.7 at your chosen corner frequency.
3. Ditto for track "Low" with the "Low Pass Filter".
(If these effects are not available in your version of Audacity they can be downloaded and installed very simply from the LADSPA link on the Audacity web site- http://audacity.sourceforge.net/download/plugins.)
4. Use the Mute and Solo buttons to A-B the original against a mix of the two tracks you have just created. Try different filter settings and listen to the results until you are happy.
5. Save each filtered track to a new file.
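For anyone who wants to try the same split outside Audacity, here is a rough sketch of steps 2-3 in Python using SciPy. This is my own illustration, not anything built into Audacity or Soundbooth: the 48 dB/octave slope maps roughly to an 8th-order Butterworth filter, and the 1 kHz corner frequency and the demo tones are just placeholders.

```python
# Sketch of the high/low band split described above, using SciPy
# Butterworth filters (48 dB/octave ~ 8th order). Corner frequency
# and test signal are illustrative assumptions, not fixed values.
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(samples, sample_rate, corner_hz, order=8):
    """Return (low, high) copies of `samples` split at `corner_hz`."""
    sos_lo = butter(order, corner_hz, btype="lowpass",
                    fs=sample_rate, output="sos")
    sos_hi = butter(order, corner_hz, btype="highpass",
                    fs=sample_rate, output="sos")
    return sosfilt(sos_lo, samples), sosfilt(sos_hi, samples)

# Quick demo: a 100 Hz tone plus a 5 kHz tone, split at 1 kHz.
fs = 44100
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 5000 * t)
low, high = split_bands(mix, fs, 1000)
```

After the split, `low` should carry essentially only the 100 Hz tone and `high` the 5 kHz tone; the same caveats about phase shift near the corner frequency apply here as with the Audacity effects.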
Note there will likely be some subtle audible differences when monitoring, due to the heavy phase errors introduced by the filters around the corner frequency. This tends to be more noticeable psycho-acoustically on the human voice than on most ambient sounds (whose natural phase relationships are already mashed up by reflections, room resonances, etc.). Using a lower filter slope will reduce these phase errors, but it also reduces the isolation between voice and background.
Unfortunately, I think it is next to impossible to totally isolate sounds in an audio stream if they overlap in frequency and at least one of them is not repetitively coherent. That said, I would be quite happy to be shot down if one of the experts on this forum who actually studies this stuff wants to contradict me.
Good luck, Alex.
Wibby.