Hi all,

> One problem I see with your proposed implementation is that it does not
> specify how audio and video would be synchronized, since there are no
> absolute time stamps for the video frames. So it is difficult to account
> for audio latency on different systems, much like when you play an AVI
> file. Ideally it would be wonderful to have timestamps that could be
> used to synchronize audio and video in a meaningful way, and if I
> understand it correctly the proposed granulepos formats do not address
> this.

That's true. On the other hand, the timing information is handled by the
codec layer and not by the Ogg layer, so it is sufficient if the codec
can reconstruct the time information. Still, it would be more convenient
to have a common time base. In the Ogg DSF I convert samples/frames to a
reference time to keep them in sync; for that purpose the number of
frames per second is stored in the header as the number of samples in
"vi.rate".

> In Ogg, things like the duration of the whole video could come in the
> first page (you could send -1 for live transmissions, for example).

There is no need to store the length. If it is just one stream you can
go to the end of the file, take the last "granulepos" and convert it
into a meaningful unit such as time.

Regards,
Tobias Waldvogel

--- >8 ----
List archives:  http://www.xiph.org/archives/
Ogg project homepage: http://www.xiph.org/ogg/
To unsubscribe from this list, send a message to
'theora-dev-request@xiph.org' containing only the word 'unsubscribe' in
the body. No subject is needed. Unsubscribe messages sent to the list
will be ignored/filtered.
> That's true. On the other hand the timing information is handled by the
> codec layer and not on the Ogg layer. So it is sufficient if the codec
> can reconstruct the time information. Anyway it would be more convenient
> to have the same time base.
> In Ogg DSF I convert samples/frames to a reference time to get them in
> sync. Therefore the number of frames/sec is stored in the header as the
> number of samples in "vi.rate"

Great. So what you are saying is that once you know the frame rate for
the video you just pass it in the header, and each granulepos indicates
a frame number. The codec then reconstructs the absolute time by
dividing the frame number by the specified rate. That should work fine,
unless you want variable durations for each video frame, which we
probably don't (I don't, at least :) ).

And what do you do when you want to invoke a seek operation? I assume
you seek on the audio and start getting video data, discarding it until
you reach a keyframe. True?

BTW, I looked at the VP3 code, and the codec can tell whether a frame is
a keyframe without any extra header information. Is it then enough to
simply mark each frame incrementally (option 1, as suggested previously
by Monty) and let the higher layers take care of seeking until they find
a keyframe (or cache keyframe positions as they pass)?

> There is no need to store the length. If it is just one stream you can
> go to the end of the file, take the last "granulepos". You have just to
> convert it into a meaningful unit like time.

Of course, thanks! This addresses the problem for file-based access (or
fast-seeking media), which was my main concern. So it looks like it can
be done with a simple header page indicating the frame rate and video
dimensions, assuming we do not use variable frame rates.
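The seek behavior sketched above (map the target time to a frame number, then discard video until the first keyframe) could look roughly like this. Everything here is an assumption for illustration: the function and struct names are invented, not actual libtheora API.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: with a fixed frame rate in the header, a
 * granulepos that counts frames maps directly to absolute time,
 * and a seek target time maps back to a frame number. */
static double frame_to_time(int64_t frame, double fps)
{
    return (double)frame / fps;
}

static int64_t time_to_frame(double t, double fps)
{
    return (int64_t)(t * fps); /* truncate toward the earlier frame */
}

/* After a seek, drop decoded frames until a keyframe has been seen. */
struct frame { int is_keyframe; /* ... decoded data ... */ };

static int should_display(const struct frame *f, int *seen_keyframe)
{
    if (f->is_keyframe)
        *seen_keyframe = 1;
    return *seen_keyframe;
}
```

With this scheme the higher layer needs no keyframe index at all; it just decodes forward from the seek point, which matches letting the codec itself report keyframes as in VP3.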
Regards,
Mauricio Piacentini
Tabuleiro
> > One problem I see with your proposed implementation is that it does
> > not specify how audio and video would be synchronized, since there
> > are no absolute time stamps for the video frames.

Yes, there are absolute timestamps. Did you read the whole thing?

> > So it is difficult to account for audio latency on different systems,
> > much like when you play an AVI file. Ideally it would be wonderful to
> > have timestamps that could be used to synchronize audio and video in
> > a meaningful way, and if I understand it correctly the proposed
> > granulepos formats do not address this.

They certainly do. The whole point is not carving a new timebase system
into the lowest Ogg layers every time a codec appears with a system that
doesn't fit. All one needs to do is ask a codec "I have this granulepos;
convert it to an absolute timestamp, please". This aspect of a codec is
required, so the stream handlers will always know the absolute time
position. And it will still work in the future even if we have a video
codec with no concept of a fixed frame rate. [You're even welcome to use
the granulepos as a straight timestamp in future codecs if you want...
but that's specific to the codec.]

> > In Ogg, things like the duration of the whole video could come in the
> > first page (you could send -1 for live transmissions, for example).

...or we can have the system we have now, which always works, is easy to
use, and doesn't require supporting an additional optional mechanism
that adds no functionality.

Monty
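The "ask the codec" contract Monty describes could be expressed as a per-codec conversion hook. The following is only an illustration of the idea; the type and function names are invented, not libogg API.

```c
#include <assert.h>
#include <stdint.h>

/* Illustration only: the Ogg layer treats granulepos as opaque and
 * delegates time conversion to the codec.  A fixed-rate codec divides
 * by its rate; a future codec with no fixed frame rate could interpret
 * the granulepos as a timestamp directly. */
typedef double (*granule_time_fn)(int64_t granulepos, void *codec_state);

struct fixed_rate { double rate; };

static double fixed_rate_time(int64_t gp, void *state)
{
    return (double)gp / ((struct fixed_rate *)state)->rate;
}

static double raw_timestamp_time(int64_t gp, void *state)
{
    (void)state;
    return (double)gp / 1000.0; /* e.g. granulepos in milliseconds */
}
```

Either way, a stream handler holding a `granule_time_fn` for each logical stream can always recover absolute time without the Ogg framing layer knowing anything about timebases.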