Dev:Sound Layer

From Synfig Studio :: Documentation
Revision as of 09:50, 2 May 2008

(This is a discussion page. If the sound is finally implemented in this way, the content of this page should go to the "Discuss this page" area and be replaced by the layer description.)

Introduction

As has been reported in the FAQ section, sound is currently not enabled in Synfig or in Synfig Studio.

The main reason for that seems to be that the current code relies on a sound library called FMOD (that code is disabled in the Linux version) and that there is no corresponding code for the Windows version. (I don't know if the Mac version has any.)

As pabs commented in the IRC channel, Synfig wouldn't be distributable if linked to the FMOD libraries: the FMOD license is probably not compatible with Synfig's GPL license.

00:17 < pabs3> for the sound stuff, we do need to figure out what sound code 
we have atm, and figure which sound playback api to use. I'm thinking gstreamer
or openal maybe
00:18 < pabs3> synfig linked to fmod would not be distributable

That said, and assuming that a sound interface is implemented inside Synfig & Synfig Studio in the future (using the GStreamer libraries, for example), this wiki page aims to state some regular animator needs regarding the interface and behavior of the sound system in Synfig & Synfig Studio.

Sound Layer

The main idea is that sound can be inserted into the animation like any other layer. Here are some rules that the Sound Layer should meet:

  1. A Synfig document can have several Sound Layers.
  2. Each Sound Layer must have exactly one sound file associated with it. The associated sound file is and remains external to the sif file. This means it is not possible to carry the sound inside a single file (at least with the current sif file format), so sound files work like external image files. The sound file reference can nevertheless be animatable, meaning that you can change the sound file over time. This would allow using several portions of recorded voices without the need to make a whole recording in a single file.
  3. The Sound Layer's scope is the canvas, exactly like any other layer in Synfig. This means that when you import a sif file into another sif file, it should carry its own referenced sound files. Also, opening a canvas in its own window and playing it would produce only the sounds that are inside that canvas.
  4. The parameters of the Sound Layer can be these:
    • Frequency Gain: Defaults to 1.0. It is the playback frequency relative to the original sound file's frequency. If the Frequency Gain is greater than 1.0, the file is played faster than its natural frequency; if it is lower than 1.0, the file is played slower. (This is not needed, as it can be achieved using the Time Loop Layer as described below --Dmd 15:37, 8 February 2008 (EST))
    • Volume Gain: Defaults to 1.0. It is a filter value to increase or decrease the sound level of the sound file. It will be used to fade sounds in/out and to perform sound mixing. Volume Gain = 0 means the sound layer is muted. (This maps naturally to the layer Amount property, which could be renamed for the Sound Layer, though. On the other hand, due to the logarithmic nature of volume there may in fact be a gain parameter, but with a default of 0, comparable to zoom. --Dmd 05:23, 9 February 2008 (EST))
    • Start Time: Defaults to 0f. It is the internal start time of the sound file. Maybe someone can think of a Duration parameter or an End Time parameter; take into consideration that the whole sound file would have to be scanned to know its length or duration. To change the duration of the sound, use the Time Loop Layer on top of it.
    • Channel: This parameter tells the mixer where to blend the sound with the rest of the sound layers. It should be a real number. For a normal stereo system, 0.0 means both speakers at the same level, a value of 1.0 would mean the right speaker, and a value of -1.0 the left one. It can be extended to more channels, but for the moment I don't know how to set up the numbers (do you have any suggestion?). Maybe it can be a list of channels with a gain value for each one. The number of channels should be taken from the general options of the file. (I'd prefer position, as described next --Dmd 15:37, 8 February 2008 (EST))
    • Position: A 2D (or even 3D) vector describing the position of the sound source. This can be animated in a natural (visible) way, linked, etc. The sound engine will render the sound into whatever audio format it supports. It would also be able to render Doppler effects depending on the speed of movement.
  5. The Sound Layer doesn't produce any visual render. Also, the Sound Layer is not affected by any other layer, with the following exceptions:
    • Time Loop Layer: This layer would affect the Sound Layer, producing the same effect it produces on a visual layer: loop from Link Time to Destination Time for a duration of Duration. See Time Loop for more details. It can be used to produce repeatable music loops, to speed the sound up or down, or even to reverse it, depending on the Time Loop parameters and its waypoints or convert types.
    • Duplicate Layer: This layer would affect the Sound Layer too. It would produce a duplicate of the Sound Layer, like for any other existing layer. It can be used to produce echoes or other nice effects.
    • Paste Canvas Layer: When a Sound Layer is inside a Paste Canvas Layer, it is affected by its Offset Time. The effect is the same as for any other layer inside the Paste Canvas Layer. The Zoom parameter can also affect the Sound Layer by increasing or decreasing its Volume Gain parameter from its base value. (the zoom thing sounds unintuitive to me, see the Position parameter for an alternative to be zoomed --Dmd 15:37, 8 February 2008 (EST))
  6. The graph panel should display the current sound wave (after applying all the other modifier parameters) when the file parameter is selected. This would allow syncing animations to sounds.
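The Channel and Volume Gain parameters above can be sketched numerically. The following is a minimal illustration, assuming a constant-power pan law; `StereoGains`, `pan_gains`, and the parameter ranges are hypothetical names for this proposal, not existing Synfig code:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: map the proposed "Channel" parameter
// (-1.0 = left speaker, 0.0 = center, +1.0 = right speaker)
// to per-speaker gains, and apply the proposed "Volume Gain" on top.
struct StereoGains { double left; double right; };

StereoGains pan_gains(double channel, double volume_gain)
{
    const double pi = std::acos(-1.0);
    // Constant-power pan law: perceived loudness stays roughly
    // equal across pan positions (left^2 + right^2 is constant).
    const double angle = (channel + 1.0) * pi / 4.0;  // 0..pi/2
    return { volume_gain * std::cos(angle),
             volume_gain * std::sin(angle) };
}
```

A "list of channels and a gain value for each one", as suggested above, would generalize this to surround setups; the stereo case is just the simplest instance.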

Exactly as there are video effect layers (e.g. the Blur Layer), sophisticated sound effect layers can be added to manipulate a plain Sound Layer. Ideas:

  • Frequency Filter Layer
  • Reverb Layer
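To make the Frequency Filter Layer idea concrete, here is a minimal sketch of what such a layer's processing step could look like: a simple one-pole RC low-pass over a buffer of samples. The function name and parameters are illustrative assumptions, not an existing Synfig API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of a "Frequency Filter Layer" core:
// a one-pole low-pass filter over a sample buffer.
std::vector<double> low_pass(const std::vector<double>& in,
                             double cutoff_hz, double sample_rate)
{
    const double pi = std::acos(-1.0);
    // Standard discretization of an analog RC low-pass.
    const double dt = 1.0 / sample_rate;
    const double rc = 1.0 / (2.0 * pi * cutoff_hz);
    const double alpha = dt / (rc + dt);
    std::vector<double> out(in.size());
    double y = 0.0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        y += alpha * (in[i] - y);  // y[i] = y[i-1] + alpha*(x[i] - y[i-1])
        out[i] = y;
    }
    return out;
}
```

A Reverb Layer would follow the same pattern with a different kernel (e.g. a network of delays and feedback).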

Please feel free to add comments or more things to this feature request page. It is UNDER CONSTRUCTION ;)

I would like to have the ability to extract values out of a sound file, which would enable making the light/brightness pulse with the same intensity as the volume of the music/sounds. This could be done with an external program which extracts the time/volume levels into a CSV file, which can be read in by Synfig. I would only need to make the brightness of my picture, or single colors of that layer in Synfig Studio, a placeholder which can take the (8-bit) CSV values at render time. Disco! --SvH 14:09, 1 May 2008 (EDT)
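The extraction step such an external program would perform can be sketched as one 8-bit volume value per animation frame. This is an assumption about how it might work (per-frame RMS, samples normalized to [-1, 1]); the function name and parameters are made up for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: compute one 8-bit volume value per animation
// frame from raw mono samples, suitable for writing to a CSV file.
std::vector<std::uint8_t> frame_volumes(const std::vector<double>& samples,
                                        double sample_rate, double fps)
{
    const std::size_t per_frame =
        static_cast<std::size_t>(sample_rate / fps);
    std::vector<std::uint8_t> out;
    for (std::size_t start = 0; start + per_frame <= samples.size();
         start += per_frame) {
        double sum = 0.0;
        for (std::size_t i = start; i < start + per_frame; ++i)
            sum += samples[i] * samples[i];
        // RMS is in [0, 1] for samples in [-1, 1]; scale to 8 bits.
        const double rms = std::sqrt(sum / per_frame);
        out.push_back(static_cast<std::uint8_t>(
            std::min(255.0, rms * 255.0)));
    }
    return out;
}
```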

I think a nice implementation of the whole sound system would be adding a method like get_color or accelerated_render to the layer class for rendering sound. That would make it easy to write new plug-ins and to extend existing ones. About extracting values: I propose a low-pass filter which pushes the low-passed amplitude into its own parameter, so you can export this parameter and reuse it like any other exported value. One nice example of using sound is a sound-driven camera shake for an action battle shot, where the camera shakes in reaction to explosions. --AkhIL 02:07, 2 May 2008 (EDT)
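The "low-passed amplitude pushed into its own parameter" idea is essentially an envelope follower: rectify the signal, then smooth it. A minimal sketch, assuming a one-pole smoother with a coefficient in (0, 1]; nothing here is existing Synfig code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of an envelope follower: the per-sample result
// could be exposed as an animatable, exportable parameter that then
// drives, e.g., a camera-shake amount or a brightness value.
std::vector<double> amplitude_envelope(const std::vector<double>& samples,
                                       double smoothing)
{
    std::vector<double> env(samples.size());
    double e = 0.0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        e += smoothing * (std::fabs(samples[i]) - e);  // one-pole on |x|
        env[i] = e;
    }
    return env;
}
```

With a small `smoothing` value the envelope tracks only slow volume changes, which is what you want for driving visual parameters without jitter.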

I propose to use ambisonics as the format for processing sound. In that case the sound becomes channel independent, which allows localizing the sound for any sound system setup (headphones, quadro, 5.1 or even a sound field) from a single set of three-dimensional ambisonics data. --AkhIL 03:50, 2 May 2008 (EDT)
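For reference, encoding a mono source into first-order ambisonic B-format (the channel-independent representation suggested above) is a small amount of math. This sketch uses the traditional B-format convention (W attenuated by 1/sqrt(2)); the names are illustrative, not Synfig API:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: encode one mono sample positioned at
// (azimuth, elevation) in radians into first-order B-format.
struct BFormat { double w, x, y, z; };

BFormat encode(double sample, double azimuth, double elevation)
{
    return {
        sample / std::sqrt(2.0),                          // W: omnidirectional
        sample * std::cos(azimuth) * std::cos(elevation), // X: front-back
        sample * std::sin(azimuth) * std::cos(elevation), // Y: left-right
        sample * std::sin(elevation)                      // Z: up-down
    };
}
```

Decoding these four channels to a concrete speaker layout (stereo, 5.1, etc.) is a separate, layout-specific step, which is exactly why the intermediate format stays channel independent.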

Another thought is making completely new stuff: Actions, or key-frame clips. For example, you have a sound file which starts at 0s and ends at 5s, and you have two key-frames which represent time positions in the sound file. Both key-frames are grouped as an action. Now you can cut this action at the 3rd second, as in an NLE, and Synfig will automatically create two new key-frames and one more action. Then you can drag the two actions (audio clips) in time.

This feature could be implemented as a new Time Loop-like layer. It would allow doing the same cutting for other layers too, for example image sequences. --AkhIL 02:07, 2 May 2008 (EDT)
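The cutting operation described above can be sketched with a couple of toy types. This is purely illustrative: `Action` here is just a pair of key-frame times into the sound file, and the behavior (cut at 3s splits [0s, 5s) into [0s, 3s) and [3s, 5s)) follows the example in the comment:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: an action is a pair of key-frame times
// (in seconds) into a sound file; cutting at time t, NLE-style,
// yields two actions that share the cut point.
struct Action { double start, end; };

std::vector<Action> cut(const Action& a, double t)
{
    if (t <= a.start || t >= a.end)
        return { a };  // cut point outside the clip: nothing to split
    return { { a.start, t }, { t, a.end } };
}
```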