Started July 29th, 2012 · 22 replies · Latest reply by zagi2 10 years, 9 months ago
Hi Guys,
I wondered if anyone had views about where you place sounds when you're mixing a recording - assuming it's not all dead centre? The project I'm working on is a spoken-word recording of a horror story. There's a 'background noise' track - a ticking clock, then the sound of a quiet pub, then some exterior sounds - and then there are various scarier effects. I was wondering if there's any prevailing theory about whether, for example, the scarier stuff should feed into your right (imaginative) hemisphere, while the logical left gets the narrative? Anyone have any thoughts?
You're going deep into Cognitive Neuroscience here.
I'm interested in this. Although I know about entrainment with certain frequencies, I'm not sure how independently our receptors are wired to the brain.
Here's an interesting article I found; although it doesn't directly answer your question, it relates to the subject.
http://www.wired.com/wiredscience/2009/06/earcigarette/
Hey, thanks for that - yeah, I'd heard that theory via a dumbed-down newspaper article - that you should always sit at meetings so that you're talking into your boss's right ear... That's a good jumping-off point, though.
Without going too deep into details like cognitive neuroscience (-; keep in mind that your listener will be listening to the story. How long can you listen to an audiobook in one ear only (on headphones) and feel comfortable? Where do you imagine the story, or repeat the words you hear in your mind? It's always somewhere in the centre - closer or farther, deeper or higher, but not left/right. You could split the vocals left/right if you had compensation (other vocals, dialogue), for example by panning, or by recording/positioning them binaurally (holophonics, a dummy head, or a plugin like Wave Arts Panorama), but in general you need a balanced scene. Things can move around it. Narration is like a second, separate channel of auditory data.
Things you can do to produce semi brain entrainment are as follows. Amplitude modulation (autopanning) with a phase shift (the vocal gets tremolo'd left/right), using a 4-16 Hz pattern for example. Or pitch/frequency modulation (a gentle/acceptable shift), using the same principles as for AM (except you don't need to add the phase difference; the vocal can stay centred). Granulation of the vocal (as a harmony adder or voice multiplier?) according to some frequency. Dynamic reverb or echo (semi-repeated phrases) driven by some parameter. These are very common techniques.
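To make the first of those concrete, here's a rough Python/NumPy sketch of AM autopanning with opposite-phase LFOs on the two channels (the function name and defaults are just my own illustration, not from any particular plugin):

```python
import numpy as np

def autopan(mono, sr, rate_hz=8.0, depth=0.6):
    """Amplitude-modulate a mono signal into stereo with opposite-phase
    sine LFOs, so the vocal tremolos between left and right.
    depth in [0, 1] controls how hard it swings; rate_hz is the
    4-16 Hz entrainment pattern mentioned above."""
    t = np.arange(len(mono)) / sr
    lfo = np.sin(2.0 * np.pi * rate_hz * t)
    left = mono * 0.5 * (1.0 + depth * lfo)    # gain rises here...
    right = mono * 0.5 * (1.0 - depth * lfo)   # ...while it falls here
    return np.stack([left, right], axis=1)     # (n_samples, 2) stereo

# Stand-in tone instead of a real vocal take:
sr = 44100
t = np.arange(sr * 5) / sr
vocal = 0.3 * np.sin(2.0 * np.pi * 220.0 * t)
stereo = autopan(vocal, sr)
```

The same LFO idea carries over to the pitch-modulation variant: drive a pitch shifter with it instead of the channel gains, and skip the phase offset.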
On the other hand, I would focus on how the verbal guidance is performed. A good narrative voice can do anything with you (-;
Regarding the left/right split - sure, it can work, but only with short instructions, not longer stories.
Really appreciate that input - I'll look into those techniques. I think you're right, though: keep it simple. I'm thinking of putting the narrator pretty much dead centre, maybe slightly to the right; background effects slightly to the left; and the really scary stuff going off in the left ear, to engage the deep recesses of the right hemisphere.
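If it helps, an equal-power pan law is the usual way to park each element at a fixed position like that without loudness jumps. A rough sketch (names and pan values are illustrative only):

```python
import numpy as np

def equal_power_pan(mono, pan):
    """Place a mono signal in the stereo field with a sin/cos
    (equal-power) pan law. pan: -1.0 hard left, 0.0 centre, +1.0 hard right."""
    theta = (pan + 1.0) * np.pi / 4.0          # map [-1, 1] onto [0, pi/2]
    return np.stack([mono * np.cos(theta),     # left channel gain
                     mono * np.sin(theta)],    # right channel gain
                    axis=1)

# e.g. narrator just right of centre, scares well off to the left
sr = 44100
take = np.random.randn(sr).astype(np.float32)  # stand-in for a recording
narrator = equal_power_pan(take, +0.1)
scare = equal_power_pan(take, -0.8)
```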
I'll post the link when I upload it (17th August)
Good luck! (-;
Recently, I lost the hearing in my right ear for a few days, due to an infection. It made me think of this thread here.
I wondered whether my perception of the world around me changed during that time.
It was quite strange to have only one functioning ear: my concentration was terrible, I found myself focusing on high-pitched noises, and I couldn't understand what people were saying even though I could hear the words. Do you think this is linked to the topic at hand?
Interesting... I would have thought it is linked to what we've been talking about.
Glad you're better now btw!
Head-Phaze, this is more likely related to the non-linearity of hearing. If speech-specific frequency spectra are affected, you can have the impression of "hearing but somewhat not understanding". A similar phenomenon occurs when learning foreign languages (different languages operate in slightly different spectra and dynamics, which is why it's sometimes so difficult at the beginning - you can't vocalize what you don't recognize). Sound interaction is made of at least two major components - sound perception and object recognition; speech is an example of the sophistication of this mechanism because of the complex (and abstract?) meaning inside.
ayamahambho wrote:
Head-Phaze, this is more likely related to the non-linearity of hearing. If speech-specific frequency spectra are affected, you can have the impression of "hearing but somewhat not understanding". A similar phenomenon occurs when learning foreign languages (different languages operate in slightly different spectra and dynamics, which is why it's sometimes so difficult at the beginning - you can't vocalize what you don't recognize). Sound interaction is made of at least two major components - sound perception and object recognition; speech is an example of the sophistication of this mechanism because of the complex (and abstract?) meaning inside.
Wow, amazing reply. This makes sense, especially with your language-learning analogy. It was an interesting experience for a short time, but I hope it never becomes permanent... :-/
Hemi-Sync. Brainwave entrainment - consciousness-altering sound.
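For anyone unfamiliar, Hemi-Sync is built on binaural beats: a slightly detuned sine tone in each ear, with the brain perceiving the difference frequency as a slow beat. A minimal sketch, with illustrative frequencies:

```python
import numpy as np

def binaural_beat(sr=44100, seconds=10.0, carrier=200.0, beat=8.0):
    """One sine tone per ear, detuned by `beat` Hz; over headphones the
    brain perceives the difference frequency as a slow pulsing beat."""
    t = np.arange(int(sr * seconds)) / sr
    left = np.sin(2.0 * np.pi * carrier * t)
    right = np.sin(2.0 * np.pi * (carrier + beat) * t)
    return 0.3 * np.stack([left, right], axis=1).astype(np.float32)
```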
With aging, many people lose the ability to hear high frequencies (a funny story told on the internet is that kids use high-frequency ringtones on their mobiles because the teachers simply can't hear them). With aging, many brains also lose the ability to recognize sound objects due to neural changes (attention disorders are an example). So don't worry - there's a big adventure waiting there (-:
But there's the opposite side: adaptation. When you work in certain areas of perception, you adapt and get better. One interesting thing here: seeing the talking face increases speech recognition (some say by about 20% or more?); try the McGurk effect, for example (easy to find on YouTube). In other words, other senses can support the missing parts.
One final thought - after some time, even if the hearing change is permanent, the brain "reshapes" the sonic representation to make you feel comfortable; in other words, you start to hear it as "normal". As an analogy, there were experiments (also easy to find via Google) with glasses that flip the picture upside down (or distort it in some funny way). After a few weeks, people started to see "normally". Missing or changed components of perception are usually "replaced" in the brain (on the software side, so to speak) in a way that lets people participate in the "collective agreement on common perception".
Now up: see http://soundcloud.com/andrewcferguson/thrawn-janet-17-08-12
Cheers, thanks very much
Thanks, I will. I'm delving into music and spoken-word combinations for a couple of months, but Writers' Bloc (www.writers-bloc.org.uk) always have a Halloween gig, so if something good comes out of that I might upload it. Or maybe a Xmas ghost story...
Thanks for listening!
Just want to thank you all for this really interesting discussion.
ayamahambho, BeeOne is a thrill, and your information is invaluable.
I enjoyed phloord's compositions.
I'm looking for a tool for the spatial positioning of sounds, but there's nothing at my hobbyist level: I'd like to easily move sounds near and far, not just left and right, when mixing samples - sorry, I suppose this is off-topic here.
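While waiting on a dedicated tool, near/far can be faked in any editor by stacking a few cues: level falling roughly with distance, high-frequency damping, and a wetter reverb balance. A rough sketch of the first two cues (the function name and numbers are just starting points to tune by ear):

```python
import numpy as np

def push_back(mono, sr, distance=1.0):
    """Crude distance cue: attenuate roughly 1/distance and damp highs
    with a one-pole low-pass whose cutoff falls as the source recedes.
    Expects float samples; distance >= 1.0, where 1.0 means 'close'."""
    d = max(distance, 1.0)
    gain = 1.0 / d
    cutoff = 12000.0 / d                        # farther away = duller
    a = np.exp(-2.0 * np.pi * cutoff / sr)      # one-pole coefficient
    out = np.empty_like(mono)
    y = 0.0
    for i, x in enumerate(mono):
        y = (1.0 - a) * x + a * y               # simple RC-style low-pass
        out[i] = y * gain
    return out
```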
BeeOne SMOD - demo preview - fully functional (except saving presets).
http://www.planetaziemia.net/counter/click.php?id=56
You can record your sounds via a background app.
P.S.: I'm still tracking down some minor bugs and adding new features.