kyma•tweaky — the kyma collective (February 2015 Archive)

Share / Sounds / DiscussMultiFunctionAudioProcessing

This Sound file contains two Sounds. The first is a general-purpose substrate for creating up to four separate effects processors inside the Capybara. It uses an 8-output block configured as 4 stereo pairs: inputs 1&2 are processed and sent out on output channels 1&2, and likewise for 3&4, 5&6, and 7&8. The substrate Sound simply contains Mixer blocks, labeled "replace me", in place of your own processing.
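
To make the routing concrete, here is a minimal sketch of the substrate's structure in Python rather than Kyma/CapyTalk; the function names and the (8, n) sample-block layout are illustrative assumptions, not anything taken from the Sound file itself:

    import numpy as np

    def passthrough(stereo):
        # Stands in for the "replace me" Mixer blocks in the substrate.
        return stereo

    # chains[i] processes input pair (2i+1, 2i+2) onto the same output pair.
    chains = [passthrough, passthrough, passthrough, passthrough]

    def process_block(inputs):
        # inputs: (8, n) array of samples; returns the (8, n) outputs.
        outputs = np.empty_like(inputs)
        for i, fx in enumerate(chains):
            outputs[2*i:2*i+2] = fx(inputs[2*i:2*i+2])
        return outputs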

The second Sound in the file is an example illustrating three separate processing chains. Channels 7&8 implement a stereo 8-band parametric equalizer. Channels 5&6 implement a Dolby-style Spectral processor, a 3-band low-level compressor for signals below -10 dBFS; the gains indicate the excess gain from compression at the threshold level selected in the VCS. Channels 1&2 implement a 1/F stereo crossfeed for headphone listening, followed by a BBE-style bass phase bender and HF enhancer.
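
One plausible reading of the low-level compressor is a static gain curve that lifts quiet material, sketched below in Python; the threshold, ratio, and boost cap are hypothetical parameters, and a real Spectral processor would apply something like this per band with attack/release smoothing:

    import numpy as np

    def low_level_gain_db(level_db, threshold_db=-10.0, ratio=2.0, max_boost_db=10.0):
        # Below the threshold, quiet material is lifted toward the threshold
        # at the given ratio; louder material passes at unity gain.
        boost = np.where(level_db < threshold_db,
                         (threshold_db - level_db) * (1.0 - 1.0 / ratio),
                         0.0)
        return np.minimum(boost, max_boost_db)

    # e.g. a -30 dBFS band gets (20 dB) * (1 - 1/2) = 10 dB of gain:
    print(low_level_gain_db(np.array([-30.0, -10.0, 0.0])))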

I use this kind of Sound block in conjunction with my MOTU 2408mkIII, which has 4 stereo pairs of analog in and out. Channels 5&6 are sent and returned to/from Capy channels 5&6, and channels 7&8 likewise to/from Capy channels 7&8. My 2408mkIII SPDIF I/O goes to Capy channels 1&2. The MOTU CueMix console alone allows flexible interconnections among the processing chains.

I also use Sonar 3 and its ASIO I/O for added processing. With this kind of system I can perform some processing in Sonar with software plug-ins, route out to a Capy processor for the more difficult and custom kinds of audio processing, then route back into Sonar for additional plug-ins, and finally out to the mixing board, possibly by way of Capy channels 1&2.

My Capy channels 1&2 simultaneously send/receive SPDIF to/from the MOTU 2408mkIII and send their analog output to my A&H mixing board. I also send/receive Capy channels 3&4 to/from that A&H console.

This kind of setup allows for some killer processing that would be impossible to achieve by any other means, since Capy and Kyma are the world's most incredible signal processors.

Cheers,

-- DavidMcClain - 14 Dec 2003

I spent the day playing with variations of MS/LR processing for widening a stereo field, using both Kyma and some outboard gear (Behringer Edison). I was also using a 1/F L/R crossfeed for headphone listening.

I had been convinced that the crossfeed for headphone listening really needed to come after the MS/LR conversions, yet with my outboard gear the widening follows the Kyma processing where the crossfeed is performed. It sounded okay either way...

So I just sat down and proved to myself that you can indeed place these two operations in either order; it makes zero difference. The reason is that both operations are examples of linear signal processing, and they commute with each other: you get the same result regardless of which order you apply them. The situation is similar to a chain of filters, where it makes no difference which filter sees the signal first.

At first this may seem strange, because the crossfeed operation involves a cross mix of a filtered and delayed version of the opposite channel. You might be tempted to think this somehow changes the spectrum, so that doing MS/LR widening after the crossfeed would differ from doing it before. But since filtering and delay are both linear operations, the overall result is still linear all the way through. It really makes zero difference which operation is applied first.
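
This is easy to check numerically. One refinement: linearity alone does not quite guarantee commutativity (two linear operators commute only in special cases); these two commute because both treat the left and right channels symmetrically, so both are diagonal in the mid/side basis. A small Python check, with the crossfeed reduced to a delayed, attenuated cross-mix (the gain, delay, and width values are made up, and any additional LTI filtering would preserve the result):

    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.standard_normal(48000)
    R = rng.standard_normal(48000)

    def crossfeed(L, R, g=0.3, d=12):
        # Mix a delayed, attenuated copy of the opposite channel into each
        # side. (A real crossfeed also filters; any LTI filter stays linear.)
        Ld = np.concatenate([np.zeros(d), L[:-d]])
        Rd = np.concatenate([np.zeros(d), R[:-d]])
        return L + g * Rd, R + g * Ld

    def widen(L, R, width=1.5):
        # MS/LR widening: scale the side (L-R) component.
        M, S = (L + R) / 2, (L - R) / 2
        return M + width * S, M - width * S

    La, Ra = widen(*crossfeed(L, R))
    Lb, Rb = crossfeed(*widen(L, R))
    print(np.max(np.abs(La - Lb)), np.max(np.abs(Ra - Rb)))  # ~1e-16: same result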

So I can stop fretting about rewiring my studio to get the crossfeed last in the chain...

[Aha!! I see now what caused my worry... I am also using various forms of compression, gating, limiting, and expansion in the processing chain. But all of these are nonlinear operations. The result of mixing two compressed signals is certainly not the same as compressing the mix of the two signals. So where you place compression in the chain could make a huge difference in the outcome...]
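
The same kind of numerical check shows the failure for a nonlinear stage. A toy memoryless compressor (hard knee, with a hypothetical threshold and ratio) is enough to see it:

    import numpy as np

    def compress(x, threshold=0.5, ratio=4.0):
        # Toy memoryless hard-knee compressor; only the nonlinearity matters.
        mag = np.abs(x)
        over = np.maximum(mag - threshold, 0.0)
        return np.sign(x) * (np.minimum(mag, threshold) + over / ratio)

    rng = np.random.default_rng(1)
    a = rng.uniform(-1, 1, 1000)
    b = rng.uniform(-1, 1, 1000)

    # compress(mix) vs. mix(compress): for a linear stage these would match.
    print(np.max(np.abs(compress(a + b) - (compress(a) + compress(b)))))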

-- DavidMcClain - 03 Jan 2004

One of the things I have found when using a headphone crossfeed is that some recordings don't sit well with this kind of processing. Examples can be found among some of the Enigma CDs. When I switch in the crossfeed I get a substantial drop in sound levels.

This implies that some amount of the left channel is already present in the right channel in negated form, and vice versa. Indeed, when the crossfeed is switched out of the system, a phase meter shows clear evidence of this, hanging around 90 degrees and even into the zone beyond. When this condition exists, crossfeeding the opposite channels simply cancels a large amount of the signal already present in the channel.
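
Here is a small sketch of the effect, assuming the right channel carries a negated copy of most of the left; the mix coefficients are made up, and the delay/filter of a real crossfeed is omitted for clarity:

    import numpy as np

    rng = np.random.default_rng(2)
    s = rng.standard_normal(48000)                    # a common source
    L = s
    R = -0.6 * s + 0.3 * rng.standard_normal(48000)   # mostly anti-phase content

    g = 0.5                       # crossfeed amount (delay/filter omitted)
    L2, R2 = L + g * R, R + g * L

    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print(rms(L), "->", rms(L2))  # level drops: g*R partially cancels L
    # Negative L/R correlation is the phase meter sitting past 90 degrees:
    print(np.corrcoef(L, R)[0, 1])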

Surely this kind of recording phase must cause problems for broadcasters of this music? Is there any kind of recording-industry standard for arranging stereo separation and phasing on CD recordings? (This is probably a stupid and rhetorical question...) By the same token, how do recording engineers assure consistent spectral performance? Or does the title "Golden Ear" endow its owner with the right to claim correctness in whatever he/she produces? In other words, is there a recording-industry "Standard Ear"?

In the absence of affirmative answers to these questions, how is one ever to set up a high-quality audio system and get consistent results? Or is "good enough for most people" the level of quality to which the industry aspires? [I'm really not trying to be sarcastic here... these are solid questions that audiophiles must be wrestling with.]

-- DavidMcClain - 03 Jan 2004

Hi David, I may be misunderstanding the problem, but as far as broadcasters are concerned they transmit stereo only (and check for mono compatibility as well). If the end listener chooses to put the signal through a headphone enhancer or any other type of processor, that's not the broadcaster's problem. If the broadcast is in Dolby stereo/surround (4 channels encoded into 2), they must be aware that any sound sent to the surround channel will completely disappear when listened to in mono, so they must make sure that no sounds essential to the soundtrack are sent there. 5.1 and the other discrete surround formats have isolated channels, so this problem disappears.
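
Pete's point about the surround channel can be seen in a simplified Dolby-style matrix encode, where the surround is placed into Lt/Rt with opposite signs; a real encoder also phase-shifts and band-limits the surround, which is omitted in this sketch:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 48000
    L, R, S = (rng.standard_normal(n) for _ in range(3))  # fronts + surround

    # Simplified matrix encode: the surround goes into Lt/Rt with opposite
    # signs (a real Dolby encoder also phase-shifts and band-limits S).
    k = 1 / np.sqrt(2)
    Lt, Rt = L + k * S, R - k * S

    mono = (Lt + Rt) / 2          # mono fold-down
    print(np.max(np.abs(mono - (L + R) / 2)))  # ~0: S has vanished entirely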

As far as sound "correctness" is concerned, well, there is none. As soon as you put sound through a mic and a speaker you've lost correctness, and that's before you consider that the mic and the speaker are in two different rooms. What's important is that it sounds clear and good. But "good" is an opinion, although fashion and familiarity can add some form of consistency. For example, when I used to engineer for a music studio, some of the clients were from Asia. I would mix with a very woolly-sounding vocal, which to me sounded wrong, but it was what they wanted to hear. If you get a chance to listen to old Bollywood films you'll hear a different type of sound. Nowadays, if we do commercials for Germany we have to compress them far more than we would for anywhere else. Some people loved the Phil Spector "wall of sound" and others hated it. It's almost all just opinion and taste.

-- PeteJohnston - 04 Jan 2004

Hi Pete,

Thanks for your feedback... I see that I'm approaching the subject as a scientist again, and not as one involved in aesthetics. I need to pinch myself whenever I go off in that direction with Kyma!

Cheers,

-- DavidMcClain

 
 
© 2003-2014 by the contributing authors.