Myers W. Carpenter
2000-May-15 13:24 UTC
[vorbis] Re: [vorbis-dev] Semi-off-topic ramblings
[cc'd to vorbis and gstreamer-devel because I thought both lists would like to see this]

> I'm curious if anyone else is at all fed up with the current
> state-of-affairs of media support under *IX. As things stand it's rather[...]

Hi,

Caught your post on vorbis-dev, but I'm not on vorbis, where Monty suggested this go, so email me directly with any replies.

I would be another person fed up with the media support under Linux. I started on a project I was calling libAV; right after coming up with the name and setting up a SourceForge account I ran across the Canvas project <http://canvas.linuxpower.org/>. They wanted to get something out fast, using code already out there. More wandering around the web turned up a lot of projects with somewhat the same goals, but they were either dead, had no code yet, or both (gmedia, Gnome Media Framework, projector). Then I found something a lot more promising: GStreamer <http://gstreamer.sourceforge.net>. GStreamer's main author is Omega Hacker, whose fingers I keep finding in more and more Linux media pies. He works as a programmer in a [...]

The whole idea behind GStreamer is similar to DirectShow on Windows and the Be Media Kit. To use GStreamer you have a "source" (file/RTP stream/video capture/etc.) and you have a "sink" (video display region/file/RTP stream/audio visualizer/etc.), and you hook up filters in between the two that modify the data in an appropriate way. This makes the whole system really flexible. Once this is done you could, with very little effort (spent mostly on your user interface), write an xmms-work-alike, a video conferencing app, a movie player, a VCR-work-alike (recording TV to disk, or heck, just stream it over to your friends in the third world :), a movie converter, and more.

GStreamer used to be Gnome Streamer (still at the top of all their files in CVS), but I think they are no longer tying themselves to Gnome.
For now they are tied to GTK because they use it for its object model, but as of GLib 1.4 there will be an object model in GLib and GStreamer will lose that dependency (meaning you won't need an X server to run this).

Since decoders and encoders are just filters in the system, and all filters (so far) are shared libraries, they could easily be binary-only. How easy it will be to convince Apple/M$/(Sur)Real to release something like that is another question; I think it will be hard. Another idea is to use winelib to access the Windoze DLLs needed to do this. If you can't get realtime speed, at least you could convert to an Ogg file (once we have a video codec) or MPEG-1.

Have a look at the project and see if this is what you are looking for. And if someone wants to write an Ogg and Vorbis filter, that would be cool too. :)

myers.
--
You're just jealous because the voices only talk to me.

--- >8 ----
List archives: http://www.xiph.org/archives/
Ogg project homepage: http://www.xiph.org/ogg/
To unsubscribe from this list, send a message to 'vorbis-request@xiph.org' containing only the word 'unsubscribe' in the body. No subject is needed. Unsubscribe messages sent to the list will be ignored/filtered.
On 15 May 2000 16:24:10 Myers W. Carpenter <myers@fil.org> wrote:
> To use the gstreamer you have a "source" (file/rtp stream/video
> capture/etc) and you have a "sink" (video display region/file/rtp
> stream/audio visualizer/etc) and you hook up filter inbetween the two
> that modifies the data in an approprate way. This makes the whole
> system really flexable.

Yes, this is very much what I'm looking for. The idea is quite similar to the one I already had and was noncommittally working on. Here is a list of features I think such a system would require in order to be successful and useful:

1) Ability to handle arbitrary amounts of sequential data while retaining seekability. This way the system can either be spoon-fed small blocks of data (for network streaming) or handle large blocks of data (most likely using memory-mapped I/O). It should also be possible to give the system a position to seek to in the stream and have it give back the number of bytes to step forward or backward to reach that position.

2) Identification of framing formats and codecs. I believe this can be done through the input function. Every framing module and decoder module should have a built-in function that accepts enough data to determine whether or not the module can handle the input format. If no module is available to process a given format (that is, every module has attempted to identify the stream and determined that it cannot process it), then the input function will simply die.

3) Seamless integration of all modules. One feature I would very much like to have is the ability to tell the system what output modules I want to use, then just give it the data and let it go from there. A simple looped function call which reads in data from the source and enters it into the media system would be all the code a complete player would require.
The system would function something like this:

                   Video Codec -> Post Processing -> Video Output
                  /
Source -> Framing
                  \
                   Audio Codec -> Post Processing -> Audio Output

The desired video output and audio output modules and any post-processing would be specified before the actual stream decoding begins. From there the end-user application would start feeding the system data, from which the framing format would be identified on the fly. From here the video and audio codecs would be similarly identified on the fly. The codecs would accept and buffer data until they have enough to pass on to the output modules (i.e. a video codec would wait until it could decode at least one entire frame before passing it on to the video output module). In this way the end-user application becomes insanely simple. Pseudocode follows...

    while (successfully_read_data_from_stream(inbuffer)) {
        add_media_data(inbuffer);
        check_for_user_io_events();
    }

The linkages shown between the modules could be implemented as function callbacks, or possibly as calls to add data to queues which are then read by separate threads.

Notice the framing is kept separate from the codecs. This is one thing many media systems (most notably Windows) fail to do. I brought this up before, but it seems only logical that if a module is present to read the given framing format, then regardless of what that format is, if a codec is present to decode the component streams, it should be able to do so without being tied to a specific framing type.

4) Encoding and transcoding abilities. It would also be nice if the system could convert data back to coded form. The simplest case would be encoding a single stream...

    Source -> Encoder -> Output

However, it should also be possible to encode the components of video and audio streams and interleave them as a framed datastream...
Video Source -> Video Encoder
                              \
                               Interleaving module -> Output
                              /
Audio Source -> Audio Encoder

Or even use both encoding and decoding functions to set up a complete transcoding system...

                   Video Decoder -> Processing -> Video Encoder
                  /                                             \
Source -> Framing                                                Framing
                  \                                             /
                   Audio Decoder -> Processing -> Audio Encoder

Is this more features than such a system needs? I think if something like it were to exist, it would be wonderful for the authors of end-user applications. It would also be hell for the people who write it. And it seems like, with working systems already established, it may simply be a waste of time. Any ideas on the matter?

Tony Arcieri