Functional Iteration Synthesis

To iterate a function is to apply some transformation, f, to a datum, x(0), then to apply the transformation again to the first resulting value, then again to the second resulting value, and so on, n times in total:

x(n) = f (x(n-1)) 
We call x(n) the nth "iterate" obtained by applying f to x(0). If f is nonlinear (e.g. a sine, a line broken into several segments, or any high-order polynomial), the process can produce very different sequences of results. The particular sequence depends on the initial datum, x(0), and on the parameters of the function f. In most cases, it is impossible to predict the output series of values. To translate this general procedure into a digital sound synthesis method, we can follow these steps:
a - initialize x(0) and f's parameters 
b - take the nth iterate, x(n), and save it as the current digital sample 
c - update x(0) and f's parameters 
d - repeat b and c for as many samples as required. 
In other words: the output stream of samples is the series of the nth iterates of f as the values of x(0) and of f's parameters change. If we call i the sample order index (discrete time index), the synthesis technique can be represented by a simple recursive formula:
x(n,i) = f(i) (x(n-1,i)) 
This is a model framework, representing a class of synthesis techniques rather than a single technique. The aspect common to all particular cases in the class lies in the fact that the stream of samples is calculated as the sequence of the nth iterates of some function. To implement a particular technique, we must select a particular nonlinear function and run a given number of iterations.
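
As a rough illustration of steps a through d, here is a minimal Python sketch of this framework; the choice of the sine map as f, the parameter values, and the way x(0) is ramped between samples are illustrative assumptions, not part of the model itself.

import math

def fis_samples(f, num_samples, n_iterations, x0, update):
    """Generic functional-iteration synthesis loop (steps a-d above).

    f      : the nonlinear map to iterate
    x0     : the initial datum x(0)                (step a)
    update : how x(0) changes between samples      (step c)
    """
    samples = []
    for i in range(num_samples):
        x = x0
        for _ in range(n_iterations):   # step b: take the nth iterate of f
            x = f(x)
        samples.append(x)               # the nth iterate is the current sample
        x0 = update(x0, i)              # step c: update x(0) (and/or f's parameters)
    return samples                      # step d: repeated for every sample

# Example: iterate a sine map, slowly ramping the initial datum each sample
out = fis_samples(f=lambda x: math.sin(3.8 * x),
                  num_samples=44100, n_iterations=8,
                  x0=0.5, update=lambda x0, i: x0 + 1e-4)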

As an example, think of the technique known as waveshaping (often called nonlinear distortion in Europe): it involves the transformation of an input signal by a waveshaper function (usually a Chebyshev polynomial, but it can also be a sine wave or another function). If the operation is repeated, feeding the output sample back into the waveshaper, we get a special case of functional iteration synthesis.
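
A minimal Python sketch of that special case, assuming a third-order Chebyshev polynomial as the waveshaper and a plain sine oscillator as the input signal (both are just one of the possibilities mentioned above):

import math

def chebyshev3(x):
    """Third-order Chebyshev polynomial T3(x) = 4x^3 - 3x, a common waveshaper."""
    return 4 * x**3 - 3 * x

def iterated_waveshaper(num_samples, n_iterations, freq=110.0, sample_rate=44100):
    """Feed each input sample through the waveshaper n times (iterated waveshaping)."""
    out = []
    for i in range(num_samples):
        x = math.sin(2 * math.pi * freq * i / sample_rate)   # input signal
        for _ in range(n_iterations):
            x = chebyshev3(x)            # feed the output back into the waveshaper
        out.append(x)
    return out

samples = iterated_waveshaper(num_samples=44100, n_iterations=3)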

Obviously, every nonlinear function determines a peculiar process of its own. However, in the literature on the theory of deterministic chaos, many have stressed that the numerical sequences obtained with iterated functions depend more heavily on the mere fact of iteration itself than on the particular function: it is the iteration that allows coherent or chaotic patterns to emerge, not the nonlinear function being iterated.

Consider the following iteration:

x(n) = sin (r*x(n-1)) 
i.e. the mapping of the sine function onto itself in the interval [-1,1]. The parameters in play are r (the control parameter) and x(0) (the initial datum). As a general rule, the control parameter ranges from 0 to 4, but in practice only values between 3.14 and 4 are of interest to us (smaller values cause the process to move towards a "fixed-point attractor", yielding a straight line as a result). The initial datum can be assigned any value from the interval [-1,1] (though in theory it can be any real number). What is the effect of the parameters? The control parameter, r, determines the overall timbral quality of the output sound. On the other hand, x(0) determines the particular series of output sample values, and is somewhat akin to the seed of a random number generator. Indeed, you can think of this synthesis process as a generator of "structured noise" whose internal process we can control. In any case, the achievable results are far more varied and rich than those obtained with white noise generators.
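
For reference, a small Python sketch of this sine map, returning the nth iterate for a given control parameter r and initial datum x(0); the particular values used below are illustrative:

import math

def sine_map_iterate(r, x0, n):
    """Return the nth iterate of x -> sin(r * x), starting from x0.

    r  : control parameter, practically useful roughly in [3.14, 4]
    x0 : initial datum in [-1, 1], acting like the seed of a noise generator
    n  : iterate order (higher n -> more chaotic output)
    """
    x = x0
    for _ in range(n):
        x = math.sin(r * x)
    return x

# Same r, two nearby seeds: similar timbral quality, different sample sequences
print(sine_map_iterate(r=3.8, x0=0.20, n=10))
print(sine_map_iterate(r=3.8, x0=0.21, n=10))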

To create some sound, either x(0) or r (or both) have to move across their value range. This can be done by driving them with an envelope generator or with an oscillator. By varying x(0) during the note, and keeping r fixed, we obtain several sounds (several different output signals) with very similar timbral properties. Vice versa, by varying r and keeping x(0) constant, we obtain different sounds starting from identical initial data. By varying both r and x(0), we obtain sounds with highly dynamic properties.

Another crucial parameter is the particular iterate utilized as the digital sample. In general, the higher the iterate order, n, the more chaotic and turbulent the output sound. Too high an iterate order generates something very close to white noise, though internally articulated and not as static. Too low an iterate order would probably generate little more than silence.

Now, what would be a good way of changing r and/or x(0) in time? If we use ramps, i.e. series of linearly increasing or decreasing values, we obtain acoustical turbulences, sometimes with a wind-like or even water-like quality to the ear. If you look at the waveform of such sounds, you will notice phase- or frequency-modulation effects, which sometimes may be heard as distinct gestures separated by silent pauses (the pauses are chunks of direct current signal, with either a positive or negative offset). That happens especially when a small number of iterates is being used, and when r is close to 3.14. These strange rumbles are the "natural state" of the mathematical model. Indeed, by simply ramping either r or x(0), we are "visiting" the whole field of possibilities implicit in the iterated function.
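
A sketch of that ramp idea in Python, assuming a linear ramp of r from 3.14 to 4 over a single note, a fixed x(0), and output to a 16-bit mono WAV file via the standard wave module (the file name, duration and parameter values are arbitrary choices):

import math, struct, wave

def ramp_r_fis(duration_s=3.0, sample_rate=44100, n_iterations=6,
               x0=0.5, r_start=3.14, r_end=4.0):
    """Ramp the control parameter r linearly over the note, keep x(0) fixed,
    and take the nth iterate of sin(r * x) as each output sample."""
    num_samples = int(duration_s * sample_rate)
    samples = []
    for i in range(num_samples):
        r = r_start + (r_end - r_start) * i / (num_samples - 1)   # linear ramp
        x = x0
        for _ in range(n_iterations):
            x = math.sin(r * x)
        samples.append(x)
    return samples

def write_wav(path, samples, sample_rate=44100):
    """Write a mono 16-bit WAV file using the standard-library wave module."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        frames = b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                          for s in samples)
        w.writeframes(frames)

write_wav("fis_ramp.wav", ramp_r_fis())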

On this page, http://xoomer.virgilio.it/adiscipi/FIS.htm, you can find examples of Functional Iteration Synthesis (using the sine map) in Csound, as well as the Kyma implementation (using the IteratedWaveshaper Sound). Enjoy.


See also Kyma Sound Library|Synthesis-Backgrounds & Pads|Distortion synthesis.kyma|Glitchy ambience from iterated waveshaper. This Sound uses the InputOutputCharacteristic to do the waveshaping.

© 2003-2014 by the contributing authors.