On Tue, Jun 16, 2009 at 6:45 PM, Ondrej Certik <ondrej at certik.cz> wrote:
> Hi,
>
> what is the best way to go about mixing effects when joining two
> videos, like crossfading?
>
> Once I have the individual images as numpy arrays, the mixing itself
> is the easy part (I'll just use numpy + scipy for that, or any other
> python lib). However, it's not clear to me how (and especially when)
> to handle decoding and encoding properly.
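
Right, the mixing itself is the easy bit. A minimal numpy sketch of a
linear crossfade might look like the following (hedged: it assumes both
clips decode to equal-length sequences of same-sized uint8 RGB frames;
crossfade, frames_a, and frames_b are just illustrative names):

    import numpy as np

    def crossfade(frames_a, frames_b):
        """Linearly blend two equal-length frame sequences (HxWx3 uint8)."""
        n = len(frames_a)
        out = []
        for i, (a, b) in enumerate(zip(frames_a, frames_b)):
            alpha = i / float(max(n - 1, 1))       # ramps 0.0 -> 1.0
            mixed = (1.0 - alpha) * a + alpha * b  # promoted to float
            out.append(np.clip(mixed.round(), 0, 255).astype(np.uint8))
        return out
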
>
> So let's say I create a video tutorial (screencast) and I have 3 ogv
> files. Now I want to join them --- so if I want some mixing effect,
> one way is to decode them, mix them + join them and then encode it as
> one video. I can do that already. But every decoding and encoding
> makes the image a little worse (am I right?), so what is the usual
> practice?
The recommended technique is to encode initially to lossless (if
possible; usually practical only for shorter material) or near-lossless
(e.g. I-frame-only Dirac at a very high bitrate), and to do the lossy
transcode at the final target rate only once, when everything else is
done.
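
As a sketch of that workflow (hedged: the rawvideo/FFV1 ffmpeg options
below are standard, but the geometry, filenames, and the mixed_frames
placeholder are made up for illustration):

    import subprocess
    import numpy as np

    WIDTH, HEIGHT, FPS = 1024, 768, 15           # assumed geometry

    # Placeholder standing in for the blended frames from the mix step.
    mixed_frames = [np.zeros((HEIGHT, WIDTH, 3), np.uint8)] * (FPS * 2)

    # Pipe raw RGB frames into a lossless FFV1 intermediate; only the
    # very last encode to the delivery format needs to be lossy.
    enc = subprocess.Popen(
        ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'rgb24',
         '-s', '%dx%d' % (WIDTH, HEIGHT), '-r', str(FPS),
         '-i', '-', '-vcodec', 'ffv1', 'intermediate.mkv'],
        stdin=subprocess.PIPE)
    for frame in mixed_frames:
        enc.stdin.write(frame.tostring())
    enc.stdin.close()
    enc.wait()
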
Failing that, you don't have many other options. You can try some
fancy stream splicing so that the unmixed parts are copied through
without any further generation loss.
> Another question is about frame rate (e.g. one frame rate for my web
> camera stuff and another for the screencast) --- I read that theora
> can join them,
Chained streams do not work in many players, unfortunately.
What you can do is make a single stream that runs at the least common
multiple of the two rates and emit duplicate frames for the slower
parts; duplicate frames encode and decode very cheaply.
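
For example (a plain-Python sketch; the 15/24 fps figures are just
hypothetical rates for a screencast and a webcam clip):

    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    def lcm(a, b):
        return a * b // gcd(a, b)

    common = lcm(15, 24)        # 120 fps master timeline
    dup_screen = common // 15   # repeat each screencast frame 8 times
    dup_webcam = common // 24   # repeat each webcam frame 5 times
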
> so that's fine, but if I want some mixing effects? And
> if I want to upload to youtube --- they will convert it to some other
> format, will it still work?
Last I checked, you couldn't upload Ogg/Theora to YouTube. I have no
clue how they would handle variable frame rate streams.