On Wed, Nov 09, 2005 at 10:52:53AM +0800, illiminable wrote:
> "fourcc" 's of rgb types
> http://www.fourcc.org/rgb.php
I'm proud to say that http://wiki.xiph.org/OggRGB supports all of these
losslessly, as well as PNG for non-indexed bitmaps. These are video formats,
correct? Not just single frames?
> raw yuv formats only
> http://www.fourcc.org/yuv.php
Wow, ok, this is interesting. I don't even know what to call some of these,
ie, "YVU9".
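(For reference, YVU9 is a planar 4:1:0 layout: a full-resolution Y plane
followed by V and U planes each subsampled by four in both dimensions, for
nine bits per pixel overall. A rough sketch of the plane-size arithmetic for
a couple of the planar FourCCs, just to make the layouts concrete:

```python
def planar_yuv_sizes(width, height, fourcc):
    """Return (y_bytes, chroma_bytes_per_plane) for a few planar YUV FourCCs.

    YVU9: 4:1:0, chroma subsampled 4x both horizontally and vertically.
    YV12/I420: 4:2:0, chroma subsampled 2x in each dimension.
    Assumes 8-bit samples and dimensions divisible by the subsampling factor.
    """
    y = width * height
    if fourcc == "YVU9":
        c = (width // 4) * (height // 4)
    elif fourcc in ("YV12", "I420"):
        c = (width // 2) * (height // 2)
    else:
        raise ValueError("unknown fourcc: %s" % fourcc)
    return y, c

# 320x240 YVU9: 76800 luma bytes + 2 * 4800 chroma bytes,
# ie (76800 + 9600) * 8 / 76800 = 9 bits per pixel.
```
)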
> Enumeration of actual types that are used in directshow (bottom of page)
>
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wcedshow/html/_wcesdk_directshow_media_types.asp
>
> Descriptions of the common yuv types used in windows
>
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwmt/html/YUVFormats.asp
These, assuming they're a subset of the above URLs, I'm not really interested
in. If DirectShow can't use a format exported by a video codec plugin, that's
perfectly OK; it can either use another plugin within OggStream to convert it
to something it /can/ use, or it can simply not support that codec.
This is of course assuming Windows will even end up using OggStream. It'd
certainly reduce the workload of the DirectShow filter writers, and maintain
more consistent compatibility across platforms, but my focus is GNU/Linux.
> >Just because the codec supports it, doesn't mean that every application
> >which uses the codec must support all the possibilities. By making the
> >data definition generic we allow more re-used code (ie, for colorspace
> >converters) and prevent the "raw codec" sprawl you above described with
> >FourCC codecs.
>
> Well that's the thing about raw codecs: if you use the types that are
> supported by hardware and/or other codecs, you don't have to do anything
> except copy the memory. When you start using things that no hardware
> generates, or can display directly, that's when you have to write code to
> convert.
Um, last I knew, video cards don't take YUV data as-is. It's converted on
some level, and it's not just an issue of decoding from the codec plugin and
throwing it to the video hardware.
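(To make that concrete, here's a sketch of the kind of per-pixel conversion
that has to happen *somewhere* in the path to the display, whether in the
driver, in overlay hardware, or in a converter plugin. The coefficients are
the standard BT.601 full-range ones; real code works on whole planes, not
single samples:

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit BT.601 full-range YUV sample to RGB.

    Illustrative only: shows the math hardware or a converter plugin
    performs, not any particular implementation.
    """
    d = u - 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# Neutral chroma passes luma straight through:
# yuv_to_rgb(128, 128, 128) -> (128, 128, 128)
```
)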
Furthermore, we aren't going to put artificial limitations on future video
codecs based on current hardware or the limitations of obsolete software. If
a codec finds it useful to encode, say, super-high-definition interlaced
video which encodes one chroma for every luma on every other line, such that
one interlaced scan includes chroma and the next doesn't, then it should be
able to decode to the same raw YUV spec as everyone else.
.. and if another codec supports that YUV layout, or hardware, etc., then it
should be able to use it directly. If it doesn't, the maker of that codec is
likely to also write a deinterlacer plugin which also does the chroma
resampling to 4:4:4 (ie, copying the chroma to 2 scans), and everyone can
use that.
This is roughly the same as if the codec plugin couldn't export its custom
format by itself, but was forced to deinterlace and convert to 4:4:4 on
decode, except that if the application DOES support it, it can use it
directly.
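(The "copying the chroma to 2 scans" step is trivial; a sketch, with the
hypothetical layout from above where only every other scan carries chroma.
Names and structure are illustrative, not any actual codec's format:

```python
def chroma_to_444(luma_rows, chroma_rows):
    """Upsample a "chroma on every other scan" layout to 4:4:4.

    luma_rows: one row of luma samples per scan line.
    chroma_rows: one row of (u, v) pairs for every *other* scan line.
    Returns one chroma row per scan line by reusing each chroma row
    for the pair of scans it covers.
    """
    return [chroma_rows[i // 2] for i in range(len(luma_rows))]
```
)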
I'm definitely leaning toward a nearly universal YUV format, myself. I'm
seeing no reason to do otherwise, and many, many advantages in the long run.
It hurts nothing if only a handful of configurations are actually used,
after all.
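(What I mean by "nearly universal": parameterize the layout instead of
minting a new FourCC per variant. A rough sketch of the idea; the field
names are mine, not from any Xiph spec:

```python
from dataclasses import dataclass

@dataclass
class YUVFormat:
    """Generic raw-YUV description: familiar layouts fall out as
    parameter choices rather than distinct codecs."""
    bits_per_sample: int   # e.g. 8
    chroma_shift_x: int    # horizontal subsampling, as a power of two
    chroma_shift_y: int    # vertical subsampling, as a power of two
    interlaced: bool = False

    def chroma_size(self, width, height):
        return (width >> self.chroma_shift_x, height >> self.chroma_shift_y)

yuv444 = YUVFormat(8, 0, 0)   # one chroma per luma
yuv420 = YUVFormat(8, 1, 1)   # e.g. what YV12/I420 carry
yuv410 = YUVFormat(8, 2, 2)   # e.g. what YVU9 carries
```
)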
> All I'm saying is, there are certain types that are defined, hardware can
> display them, and hardware like cameras generates them.
But that's the difference between Microsoft and Xiph. They write things and
expect to write something new in 12-18 months to replace it, we write things
with the intention that they be used more than 10 years in the future.
They write things with fixed values and artificial limitations, simply because
it seems easier or it's easier for them to debug, where we write things
open-ended and flexible such that we can continue to improve them over time.
They would do something as asinine as designing a special codec for every
individual format of raw data, with very little flexibility in each one,
where we write one codec to cover all of them and much, much more.
If we wrote things like Microsoft does, we would be on Vorbis IX by now, not
just beginning to talk about Vorbis II, and people would either have difficulty
finding Vorbis I/II/III codec plugins or they'd be supplied in the Vorbis IX
codec plugin which would be several times the size it really needed to be.
:-)
Thinking about this has certainly clarified my position on the different
things we've talked about, and yeah, I'm backtracking to the universal
format, because it's what feels right for Xiph to do. It doesn't matter if
95%+ of the different combinations never end up getting used; they are there
if they're needed, and code for converter plugins can be more efficiently
reused.
--
The recognition of individual possibility,
to allow each to be what she and he can be,
rests inherently upon the availability of knowledge;
The perpetuation of ignorance is the beginning of slavery.
from "Die Gedanken Sind Frei": Free Software and the Struggle for Free Thought
by Eben Moglen, General Counsel of the Free Software Foundation