While I understand everything at Oracle is "top secret" these days, does anyone have any insight into a next-gen X4500 / X4540? Does some other Oracle / Sun partner make a comparable system that is fully supported by Oracle / Sun?

http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html

What do X4500 / X4540 owners use if they'd like more comparable ZFS-based storage and full Oracle support?

I'm aware of Nexenta and other cloned products but am specifically asking about Oracle-supported hardware. However, does anyone know if these types of vendors will be at NAB this year? I'd like to talk to a few if they are...

--
Thank you,
Chris Banal
On 4/7/2011 10:25 AM, Chris Banal wrote:
> While I understand everything at Oracle is "top secret" these days, does
> anyone have any insight into a next-gen X4500 / X4540? Does some other
> Oracle / Sun partner make a comparable system that is fully supported by
> Oracle / Sun?
>
> What do X4500 / X4540 owners use if they'd like more comparable ZFS-based
> storage and full Oracle support?

The move seems to be to the Unified Storage (aka ZFS Storage) line, which is the successor to the 7000-series OpenStorage products.

http://www.oracle.com/us/products/servers-storage/storage/unified-storage/index.html

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
> The move seems to be to the Unified Storage (aka ZFS Storage) line, which
> is the successor to the 7000-series OpenStorage products.
>
> http://www.oracle.com/us/products/servers-storage/storage/unified-storage/index.html

Which is not a lot of use to those of us who use X4540s for what they were intended: storage appliances. We have had to take the retrograde step of adding more, smaller servers (like the ones we consolidated on the X4540s!).

--
Ian.
On Apr 8, 2011, at 2:37 AM, Ian Collins <ian at ianshome.com> wrote:
> On 04/ 8/11 06:30 PM, Erik Trimble wrote:
>> The move seems to be to the Unified Storage (aka ZFS Storage) line,
>> which is the successor to the 7000-series OpenStorage products.
>
> Which is not a lot of use to those of us who use X4540s for what they
> were intended: storage appliances.

Can you elaborate briefly on what exactly the problem is? I don't follow. What else would an X4540 or a 7xxx box be used for, other than a storage appliance?

Guess I'm slow. :-)

Mark
On 4/8/2011 12:37 AM, Ian Collins wrote:
> Which is not a lot of use to those of us who use X4540s for what they
> were intended: storage appliances.
>
> We have had to take the retrograde step of adding more, smaller servers
> (like the ones we consolidated on the X4540s!).

Sorry, I read the question differently, as in "I have an X4500/X4540 now, and want more of them, but Oracle doesn't sell them anymore, so what can I buy?". The 7000-series (now: Unified Storage) *are* storage appliances.

If you have an X4540/X4500 (and some cash burning a hole in your pocket), Oracle will be happy to sell you a support license (which should include later versions of the ZFS software). But don't quote me on that - talk to a Sales Rep if you want a quote. <wink>

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
> Can you elaborate briefly on what exactly the problem is? I don't follow.
> What else would an X4540 or a 7xxx box be used for, other than a storage
> appliance?
>
> Guess I'm slow. :-)

No, I just wasn't clear - we use ours as storage/application servers. They run Samba, Apache and various other applications and P2V zones that access the large pool of data. Each also acts as a failover box (both data and applications) for the other.

They replaced several application servers backed by a SAN for a fraction of the price of a new SAN.

--
Ian.
On Apr 8, 2011, at 3:29 AM, Ian Collins <ian at ianshome.com> wrote:
> No, I just wasn't clear - we use ours as storage/application servers.
> They run Samba, Apache and various other applications and P2V zones that
> access the large pool of data. Each also acts as a failover box (both
> data and applications) for the other.
>
> They replaced several application servers backed by a SAN for a fraction
> of the price of a new SAN.

You have built-in storage failover with an AR cluster, and they do NFS, CIFS, iSCSI, HTTP and WebDAV out of the box.

And you have fairly unlimited options for application servers, once they are decoupled from the storage servers.

It doesn't seem like much of a drawback -- although it may be for some smaller sites. I see AR clusters going in in local high schools and small universities.

Anything's a fraction of the price of a SAN, isn't it? :-)

Mark
On 04/ 8/11 09:49 PM, Mark Sandrock wrote:
> You have built-in storage failover with an AR cluster, and they do NFS,
> CIFS, iSCSI, HTTP and WebDAV out of the box.
>
> And you have fairly unlimited options for application servers, once they
> are decoupled from the storage servers.
>
> It doesn't seem like much of a drawback -- although it may be for some
> smaller sites. I see AR clusters going in in local high schools and small
> universities.

Which is all fine and dandy if you have a green field, or are willing to re-architect your systems. We just wanted to add a couple more X4540s!

--
Ian.
On 04/ 8/11 01:14 PM, Ian Collins wrote:
> Which is all fine and dandy if you have a green field, or are willing to
> re-architect your systems. We just wanted to add a couple more X4540s!

Hi, same here - it's sad news that Oracle decided to stop the X4540 production line. Before, ZFS geeks had a choice: buy the 7000 series if you want quick "out of the box" storage with a nice GUI, or build your own storage with the X4540 line, which by the way has a brilliant engineering design. That choice is gone now.

Regards,
On Fri, 8 Apr 2011, Mark Sandrock wrote:
> And you have fairly unlimited options for application servers, once they
> are decoupled from the storage servers.
>
> It doesn't seem like much of a drawback -- although it

The rather extreme loss of I/O performance (at least several orders of magnitude) to the application, along with increased I/O latency, seems like quite a drawback.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Fri, 8 Apr 2011, Erik Trimble wrote:
> Sorry, I read the question differently, as in "I have an X4500/X4540 now,
> and want more of them, but Oracle doesn't sell them anymore, so what can
> I buy?". The 7000-series (now: Unified Storage) *are* storage appliances.

They may be storage appliances, but the user cannot put their own software on them. This limits the appliance to only the features that Oracle decides to put on it.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On 08/04/2011 14:59, Bob Friesenhahn wrote:
> They may be storage appliances, but the user cannot put their own
> software on them. This limits the appliance to only the features that
> Oracle decides to put on it.

Isn't that the very definition of an Appliance?

--
Darren J Moffat
On Fri, April 8, 2011 10:06, Darren J Moffat wrote:
>> They may be storage appliances, but the user cannot put their own
>> software on them. This limits the appliance to only the features that
>> Oracle decides to put on it.
>
> Isn't that the very definition of an Appliance?

Yes, but the OP wasn't looking for an appliance; he was looking for a (general) server that could hold lots of disks. The X4540 was well designed and suited their need for storage and CPU (as it did Greenplum's); it was fairly unique as a design.
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla <evaldas.auryla at edqm.eu> wrote:
> Hi, same here - it's sad news that Oracle decided to stop the X4540
> production line. Before, ZFS geeks had a choice: buy the 7000 series if
> you want quick "out of the box" storage with a nice GUI, or build your
> own storage with the X4540 line, which by the way has a brilliant
> engineering design. That choice is gone now.

Okay, so what is the great advantage of an X4540 versus an x86 server plus disk array(s)?

Mark
On Fri, Apr 08, 2011 at 08:29:31PM +1200, Ian Collins wrote:
> On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
...
>> I don't follow. What else would an X4540 or a 7xxx box be used for,
>> other than a storage appliance?
...
> No, I just wasn't clear - we use ours as storage/application servers.
> They run Samba, Apache and various other applications and P2V zones that
> access the large pool of data. Each also acts as a failover box (both
> data and applications) for the other.

Same thing here, plus several zones (source code repositories, documentation, even a real Samba server to avoid the MS crap, an install server, shared installs (i.e. relocatable packages shared via NFS, e.g. as /local/usr ...)). So yes, the 7xxx is a no-go for us as well. If there are no X45xx, we'll find alternatives from other companies ...

>> Guess I'm slow. :-)

Maybe - flexibility/dependencies are some of the keywords ;-)

Regards,
jel.
--
Otto-von-Guericke University    http://www.cs.uni-magdeburg.de/
Department of Computer Science  Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany        Tel: +49 391 67 12768
On 04/08/2011 05:20 PM, Mark Sandrock wrote:
> Okay, so what is the great advantage of an X4540 versus an x86 server
> plus disk array(s)?
>
> Mark

Several:

1) Density: the X4540 has far greater density than a 1U server plus Sun's J4200 or J4400 storage arrays. The X4540 did 12 disks / 1RU, whereas a 1U server + 2x J4400 only manages ~5.3 disks / 1RU.

2) Number of components involved: server + disk enclosure means you have more PSUs which can die on you, more cabling to accidentally disconnect, and generally more hassle with installation.

3) Spare management: with the X4540 you only have to keep one kind of spare component: the server. With servers + enclosures, you might need to keep several.

I agree that besides 1), both 2) and 3) are relatively trivial problems to solve. Of course, server + enclosure builds do have their place, such as when you might need to scale, but even then you could just hook them up to an X4540 (or purchase a new one - I never quite understood why the storage-enclosure-only variant of the X4540 case was more expensive than an identical server).

In short, I think the X4540 was an elegant and powerful system that definitely had its market, especially in my area of work (digital video processing - heavy on latency, throughput and IOPS - an area where the 7000-series with its over-the-network access would just be a totally useless brick).

--
Saso
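A quick back-of-the-envelope check of the density figures above (a sketch only; the 48-drives-in-4U X4540 and 24-drives-in-4U J4400 form factors are assumptions taken from public spec sheets, not stated in this thread):

# Drive density: X4540 vs. a 1U head server plus two J4400 JBODs.
def drives_per_ru(drives, rack_units):
    """Drives per rack unit (RU)."""
    return drives / rack_units

x4540 = drives_per_ru(48, 4)                    # assumed 48 drives in 4U
jbod_build = drives_per_ru(2 * 24, 1 + 2 * 4)   # 48 drives in 9U total

print(f"X4540:         {x4540:.1f} drives/RU")    # 12.0
print(f"1U + 2x J4400: {jbod_build:.1f} drives/RU")  # ~5.3

Both results match the 12 and ~5.3 drives/RU quoted in point 1) above.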
On 08/04/2011 17:47, Sašo Kiselkov wrote:
> In short, I think the X4540 was an elegant and powerful system that
> definitely had its market, especially in my area of work (digital video
> processing - heavy on latency, throughput and IOPS - an area where the
> 7000-series with its over-the-network access would just be a totally
> useless brick).

As an engineer I'm curious: have you actually tried a suitably sized S7000, or are you assuming it won't perform suitably for you?

--
Darren J Moffat
On 04/08/2011 06:59 PM, Darren J Moffat wrote:
> As an engineer I'm curious: have you actually tried a suitably sized
> S7000, or are you assuming it won't perform suitably for you?

No, I haven't tried an S7000, but I've tried other kinds of network storage and from a design perspective, for my applications, it doesn't even make a single bit of sense. I'm talking about high-volume real-time video streaming, where you stream 500-1000 (x 8 Mbit/s) live streams from a machine over UDP. Having to go over the network to fetch the data from a different machine is kind of like building a proxy which doesn't really do anything - if the data is available from a different machine over the network, then why the heck should I put another machine in the processing path? For my applications, I need a machine with as few processing components between the disks and the network as possible, to maximize throughput, maximize IOPS and minimize latency and jitter.

Cheers,
--
Saso
> No, I haven't tried an S7000, but I've tried other kinds of network
> storage and from a design perspective, for my applications, it doesn't
> even make a single bit of sense. ... For my applications, I need a
> machine with as few processing components between the disks and the
> network as possible, to maximize throughput, maximize IOPS and minimize
> latency and jitter.

I can't speak for this particular situation or solution, but I think in principle you are wrong. Networks are fast. Hard drives are slow. Put a 10G connection between your storage and your front ends and you'll have the bandwidth [1]. Actually, if you really were hitting 1000 x 8 Mbit I'd put in two, but that is just a question of scale. In a different situation I have boxes which peak at around 7 Gb/s down a 10G link (in reality I don't need that much, because it is all about the IOPS for me). That is with just twelve 15k disks. Your situation appears to be pretty ideal for storage hardware, so perfectly achievable from an appliance.

I can't speak for the S7000 range. I ignored that entire product line because when I asked about it the markup was insane compared to just buying X4500/X4540s. The price for Oracle kit isn't remotely tenable, so the death of the X45xx range is a moot point for me anyway, since I couldn't afford it.

[1] Just in case: you also shouldn't be adding any particularly significant latency either. Jitter, maybe, depending on the specifics of the streams involved.

Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
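A sanity check on the stream counts being discussed here (a rough sketch; the ~10% protocol overhead factor is an assumption of mine, not a figure from either poster):

# Can 500-1000 x 8 Mb/s UDP streams fit down a single 10 GbE link?
STREAM_MBPS = 8      # per-stream rate from the thread
OVERHEAD = 1.10      # assumed IP/UDP/Ethernet framing overhead (~10%)
LINK_GBPS = 10.0

for n in (500, 1000):
    demand_gbps = n * STREAM_MBPS * OVERHEAD / 1000.0
    print(f"{n} streams: {demand_gbps:.1f} Gb/s "
          f"({100 * demand_gbps / LINK_GBPS:.0f}% of one 10 GbE link)")

At 500 streams the link is under half full (~4.4 Gb/s); at 1000 streams it runs near 90% full (~8.8 Gb/s), which is why a second 10G link is suggested at the top end.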
From: Sašo Kiselkov
Date: 2011-Apr-08 17:45 UTC
Subject: [zfs-discuss] Network video streaming [Was: Re: X4540 no next-gen product?]
On 04/08/2011 07:22 PM, J.P. King wrote:
> I can't speak for this particular situation or solution, but I think in
> principle you are wrong. Networks are fast. Hard drives are slow. Put a
> 10G connection between your storage and your front ends and you'll have
> the bandwidth. Actually, if you really were hitting 1000 x 8 Mbit I'd put
> in two, but that is just a question of scale.

I envision this kind of scenario (using my fancy ASCII art skills :-)):

|| ===== streaming server ===== ||
+-----+ SAS  +-----+ PCI-e +-----+ Ethernet +--------+
|DISKS| ===> | RAM | ====> | NIC | =======> | client |
+-----+      +-----+       +-----+          +--------+

And you are advocating this kind of scenario:

|| ====== network storage ===== ||
+-----+ SAS  +-----+ PCI-e +-----+ Ethernet
|DISKS| ===> | RAM | ====> | NIC | ========> ...
+-----+      +-----+       +-----+

        || ====== streaming server ===== ||
        +-----+ PCI-e +-----+ PCI-e +-----+ Ethernet +--------+
... ==> | NIC | ====> | RAM | ====> | NIC | =======> | client |
        +-----+       +-----+       +-----+          +--------+

I'm not constrained on CPU (so hooking up multiple streaming servers to one backend storage doesn't really make sense). So what exactly does this scenario add to my needs, besides needing extra hardware in both the storage and the server (10G NICs, cabling, modules, etc.)? I'm not saying no - I'd love to improve the throughput, IOPS and latency characteristics of my systems.

--
Saso
From: Sašo Kiselkov
Date: 2011-Apr-08 17:51 UTC
Subject: [zfs-discuss] Network video streaming [Was: Re: X4540 no next-gen product?]
On 04/08/2011 07:45 PM, Sašo Kiselkov wrote:
> I'm not constrained on CPU (so hooking up multiple streaming servers to
> one backend storage doesn't really make sense). So what exactly does this
> scenario add to my needs, besides needing extra hardware in both the
> storage and the server (10G NICs, cabling, modules, etc.)? I'm not saying
> no - I'd love to improve the throughput, IOPS and latency characteristics
> of my systems.

P.S. I forgot to add that I also need plenty of storage space, so while 15k disks are great for throughput and IOPS, they are way too expensive. Also, I hit the IOPS wall before I hit throughput limits (a 3x 4-disk raid-z pool maxes out at around 200 concurrent read streams + 30 live-ingest write streams).

--
Saso
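A sketch of why the IOPS wall described above bites before raw bandwidth does. All the per-disk and recordsize numbers here are assumptions of mine (default 128 KiB ZFS recordsize, ~150 random-read IOPS per 7200 rpm disk, and the rule of thumb that a raidz vdev delivers roughly one disk's worth of random-read IOPS); none of them come from the thread:

# Demand side: 200 read streams at 8 Mb/s each.
STREAM_BYTES_PER_S = 8_000_000 / 8       # 1 MB/s per stream
RECORD_BYTES = 128 * 1024                # assumed ZFS recordsize
STREAMS = 200

# Supply side: 3 raidz vdevs, each worth ~one disk of random-read IOPS.
VDEVS = 3
IOPS_PER_VDEV = 150

demand_iops = STREAMS * STREAM_BYTES_PER_S / RECORD_BYTES
supply_iops = VDEVS * IOPS_PER_VDEV

print(f"naive demand: {demand_iops:.0f} IOPS")   # ~1500
print(f"naive supply: {supply_iops} IOPS")       # ~450

Prefetch and on-disk locality close much of that naive gap for mostly sequential streams, but the arithmetic shows why a pool like this tops out at a few hundred concurrent streams while still having plenty of raw sequential bandwidth to spare.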
jpk28 at cam.ac.uk said:
> I can't speak for this particular situation or solution, but I think in
> principle you are wrong. Networks are fast. Hard drives are slow. Put a
> 10G connection between your storage and your front ends and you'll have
> the bandwidth. ... Your situation appears to be pretty ideal for storage
> hardware, so perfectly achievable from an appliance.

Depending on usage, I disagree with your bandwidth and latency figures above. An X4540, or an X4170 with J4000 JBODs, has more bandwidth to its disks than 10Gbit Ethernet. You would need three 10GbE interfaces between your CPU and the storage appliance to equal the bandwidth of a single 8-port 3Gb/s SAS HBA (five of them for 6Gb/s SAS).

It's also the case that the Unified Storage platform doesn't have enough bandwidth to drive more than four 10GbE ports at their full speed:
http://dtrace.org/blogs/brendan/2009/09/22/7410-hardware-update-and-analyzing-the-hypertransport/

We have a customer (internal to the university here) that does high-throughput gene sequencing. They like a server which can hold the large amounts of data, do a first-pass analysis on it, and then serve it up over the network to a compute cluster for further computation. Oracle has nothing in their product line (anymore) to meet that need. They ended up ordering an 8U chassis with 40x 2TB drives in it, and are willing to pay the $2k/yr retail ransom to Oracle to run Solaris (ZFS) on it, at least for the first year. Maybe OpenIndiana next year, we'll see.

Bye Oracle....

Regards,

Marion
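Checking the HBA-versus-Ethernet arithmetic in the message above (a sketch using raw signalling rates only; encoding and protocol overhead on both sides are deliberately ignored):

import math

def gbe_links_to_match(ports, gbps_per_port, link_gbps=10):
    """10 GbE links needed to match an HBA's aggregate lane rate."""
    hba_gbps = ports * gbps_per_port
    return hba_gbps, math.ceil(hba_gbps / link_gbps)

for lane_gbps in (3, 6):
    total, links = gbe_links_to_match(8, lane_gbps)
    print(f"8-port {lane_gbps} Gb/s SAS HBA: {total} Gb/s "
          f"-> {links} x 10 GbE links")

This reproduces the figures above: 24 Gb/s of 3Gb/s SAS needs three 10 GbE links, and 48 Gb/s of 6Gb/s SAS needs five.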
Sounds like many of us are in a similar situation.

To clarify my original post: the goal here was to continue with what was a cost-effective solution to some of our storage requirements. I'm looking for hardware that wouldn't cause me to get the runaround from the Oracle support folks, finger-pointing between vendors, or lots of grief from an untested combination of parts. If this isn't possible we'll certainly find another solution. I already know it won't be the 7000 series.

Thank you,
Chris Banal

Marion Hakanson wrote:
> Depending on usage, I disagree with your bandwidth and latency figures
> above. An X4540, or an X4170 with J4000 JBODs, has more bandwidth to its
> disks than 10Gbit Ethernet.
On 4/8/2011 1:58 PM, Chris Banal wrote:
> To clarify my original post: the goal here was to continue with what was
> a cost-effective solution to some of our storage requirements. I'm
> looking for hardware that wouldn't cause me to get the runaround from the
> Oracle support folks, finger-pointing between vendors, or lots of grief
> from an untested combination of parts.

Talk to HP then. They still sell officially supported Solaris servers and disk storage systems in more varieties than Oracle does.

The StorageWorks 600 Modular Disk System may be what you're looking for (70 x 2.5" drives per enclosure, 5U, SAS/SATA/FC attachment to any server, $35k list price for 70TB). Or the StorageWorks 70 Modular Disk Array (25 x 2.5" drives, 1U, SAS attachment, $11k list price for 12.5TB).

-Erik

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
On Fri, 8 Apr 2011, J.P. King wrote:
> I can't speak for this particular situation or solution, but I think in
> principle you are wrong. Networks are fast. Hard drives are slow. Put a

But memory is much faster than either. In most situations the data would already be buffered in the X4540's memory so that it is instantly available.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On 4/8/2011 4:50 PM, Bob Friesenhahn wrote:
> But memory is much faster than either. In most situations the data would
> already be buffered in the X4540's memory so that it is instantly
> available.
>
> Bob

Certainly, as a low-end product, the X4540 (and X4500) offered unmatched flexibility and performance per dollar. It *is* sad to see them go. But, given Oracle's strategic direction, is anyone really surprised?

PS - Nexenta, I think you've got a product positioning opportunity here...

PPS - about the closest thing Oracle makes to the X4540 now is the X4270 M2 in the 2.5" drive config - 24 x 2.5" drives, 2 x Westmere-EP CPUs, in a 2U rack cabinet, somewhere around $25k (list) for the 24x500GB SATA model with (2) 6-core Westmeres + 16GB RAM.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
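For comparison, the raw dollars-per-terabyte of the configurations quoted in this thread (list prices and capacities exactly as the posters gave them; this sketch ignores RAID overhead, support contracts and drive-class differences):

# (name, list price in USD, raw capacity in TB) -- figures from the thread
configs = [
    ("HP StorageWorks 600 MDS",    35_000, 70.0),
    ("HP StorageWorks 70 MDA",     11_000, 12.5),
    ("Oracle X4270 M2 (24x500GB)", 25_000, 12.0),
]

for name, price, tb in configs:
    print(f"{name:30s} ~${price / tb:,.0f} per raw TB")

That works out to roughly $500, $880 and $2,080 per raw TB respectively, which goes some way to explaining the grumbling here about the X4540's successors.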
Can anyone comment on Solaris with ZFS on HP systems? Do things work reliably? When there is trouble, how many hoops does HP make you jump through (how painful is it to get a part replaced that isn't flat-out smokin')? Have you gotten bounced between vendors?

Thanks,
Chris

Erik Trimble wrote:
> Talk to HP then. They still sell officially supported Solaris servers and
> disk storage systems in more varieties than Oracle does.
>
> The StorageWorks 600 Modular Disk System may be what you're looking for
> (70 x 2.5" drives per enclosure, 5U, SAS/SATA/FC attachment to any
> server, $35k list price for 70TB). Or the StorageWorks 70 Modular Disk
> Array (25 x 2.5" drives, 1U, SAS attachment, $11k list price for 12.5TB).
>
> -Erik
On 04/ 9/11 03:20 AM, Mark Sandrock wrote:
> Okay, so what is the great advantage of an X4540 versus an x86 server
> plus disk array(s)?

One less x86 box (even more of an issue now we have to mortgage the children for support), and a lot less $.

Not to mention an existing infrastructure built using X4540s, and me looking a fool explaining to the client that they can't get any more, so the systems we have spent two years building up are a dead end.

One size does not fit all; choice is good for business.

--
Ian.
On 04/ 9/11 02:26 AM, David Magda wrote:
> Yes, but the OP wasn't looking for an appliance; he was looking for a
> (general) server that could hold lots of disks. The X4540 was well
> designed and suited their need for storage and CPU (as it did
> Greenplum's); it was fairly unique as a design.

One thing often overlooked with the X4540 is that its processing power is way more than is needed for simply managing storage, so they are ideal for consolidating and collocating applications with their data.

--
Ian.
> Sounds like many of us are in a similar situation.
>
> To clarify my original post: the goal here was to continue with what was
> a cost-effective solution to some of our storage requirements. I'm
> looking for hardware that wouldn't cause me to get the runaround from the
> Oracle support folks, finger-pointing between vendors, or lots of grief
> from an untested combination of parts.

For us the unfortunate answer to the situation was to abandon Oracle/Sun and ZFS entirely. Despite evaluating and considering ZFS on other platforms, it just wasn't worth the trouble; we need storage today. While we will likely expand our existing fleet of X4540s as much as possible with JBODs, that will be the end of that solution and of our use of ZFS.

Ultimately a large storage vendor (EMC) came to the table with a solution similar to the X4540 at a $/GB and $/IOPS level that no other vendor could even get close to.

We will revisit this decision later depending on the progress of Illumos and others, but for now things are still too uncertain to make the financial commitment.

- Adam
On Apr 8, 2011, at 9:39 PM, Ian Collins <ian at ianshome.com> wrote:
> One less x86 box (even more of an issue now we have to mortgage the
> children for support), and a lot less $.
>
> Not to mention an existing infrastructure built using X4540s, and me
> looking a fool explaining to the client that they can't get any more, so
> the systems we have spent two years building up are a dead end.
>
> One size does not fit all; choice is good for business.

I'm not arguing. If it were up to me, we'd still be selling those boxes.

Mark
On 04/ 9/11 03:53 PM, Mark Sandrock wrote:
> I'm not arguing. If it were up to me, we'd still be selling those boxes.

Maybe you could whisper in the right ear? :)

--
Ian.
On Apr 8, 2011, at 11:19 PM, Ian Collins <ian at ianshome.com> wrote:
> Maybe you could whisper in the right ear? :)

I wish. I'd have a long list if I could do that.

Mark
On 4/8/2011 9:19 PM, Ian Collins wrote:
> Maybe you could whisper in the right ear? :)

Three little words are all that Oracle Product Managers hear: "business case justification". <wry smile>

I want my J4000s back, too. And I still want something like HP's MSA70 (25 x 2.5" drive JBOD in a 2U form factor).

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On Fri, Apr 8 at 18:08, Chris Banal wrote:
> Can anyone comment on Solaris with ZFS on HP systems? Do things work
> reliably? When there is trouble, how many hoops does HP make you jump
> through (how painful is it to get a part replaced that isn't flat-out
> smokin')? Have you gotten bounced between vendors?

When I was choosing between HP and Dell about two years ago, the HP RAID adapter wasn't supported out of the box by Solaris, while the Dell T410/610/710 systems were using the Dell SAS-6i/R, which is a rebranded LSI 1068i-R adapter. I believe Dell's H200 is basically an LSI 9211-8i, which also works well.

I can't comment on HP's support; I have no experience with it. We now self-support our software (OpenIndiana b148).

--eric

--
Eric D. Mudama
edmudama at bounceswoosh.org
On Fri, Apr 8 at 22:03, Erik Trimble wrote:
> I want my J4000s back, too. And I still want something like HP's MSA70
> (25 x 2.5" drive JBOD in a 2U form factor).

Just noticed that SuperMicro is now selling a 4U, 72-bay 2.5" 6Gbit/s SAS chassis, the SC417. It's unclear from the documentation how many 6Gbit/s SAS lanes are connected for that many devices, though.

Maybe that plus a support contract from Sun would be a worthy replacement, though you definitely won't have a single vendor to contact for service issues.

--eric

--
Eric D. Mudama
edmudama at bounceswoosh.org
On 8 Apr 2011, at 19:43, Marion Hakanson <hakansom at ohsu.edu> wrote:
>> which peak at around 7 Gb/s down a 10G link (in reality I don't need
>> that much, because it is all about the IOPS for me). That is with just
>> twelve 15k disks.
>
> Depending on usage, I disagree with your bandwidth and latency figures
> above. An X4540, or an X4170 with J4000 JBODs, has more bandwidth to its
> disks than 10Gbit Ethernet.

Actually I think our figures more or less agree. 12 disks = 7 mbits; 48 disks = 4 x 7 mbits.

What is actually required in practice depends on a lot of factors.

Julian
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Julian King
>
> Actually I think our figures more or less agree. 12 disks = 7 mbits;
> 48 disks = 4 x 7 mbits.

I know that sounds like terrible performance to me. Any time I benchmark disks, a cheap generic SATA drive can easily sustain 500 Mbit, and any decent drive can easily sustain 1 Gbit.

Of course it's lower when there's significant random seeking happening... but if you have a data model which is able to stream sequentially, the above is certainly true.
On 04/09/2011 01:41 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-bounces at opensolaris.org On Behalf Of Julian King
>>
>> Actually I think our figures more or less agree. 12 disks = 7 mbits;
>> 48 disks = 4 x 7 mbits.
>
> I know that sounds like terrible performance to me. Any time I benchmark
> disks, a cheap generic SATA drive can easily sustain 500 Mbit, and any
> decent drive can easily sustain 1 Gbit.

I think he mistyped and meant 7 Gbit/s.

> Of course it's lower when there's significant random seeking
> happening... but if you have a data model which is able to stream
> sequentially, the above is certainly true.

Unfortunately, this is exactly my scenario: I want to stream large volumes of data in many concurrent threads over large datasets which have no hope of fitting in RAM or L2ARC, and with generally very little locality.

--
Saso
On 9 Apr 2011, at 12:59, Sašo Kiselkov <skiselkov.ml at gmail.com> wrote:
>> Actually I think our figures more or less agree. 12 disks = 7 mbits;
>> 48 disks = 4 x 7 mbits.
>
> I think he mistyped and meant 7 Gbit/s.

Oops. Yes I did!

> Unfortunately, this is exactly my scenario: I want to stream large
> volumes of data in many concurrent threads over large datasets which
> have no hope of fitting in RAM or L2ARC, and with generally very little
> locality.

Clearly one of those situations where any setup will struggle.

Julian
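Reconciling the two sets of numbers in this sub-thread (the per-disk rate is Edward's ballpark figure for sequential reads; the conclusion about overhead is an inference of mine, not something either poster measured):

DISK_GBPS = 1.0   # "any decent drive can easily sustain 1 Gbit" (sequential)

for disks in (12, 48):
    agg = disks * DISK_GBPS
    print(f"{disks} disks: ~{agg:.0f} Gb/s aggregate sequential read")

Twelve disks give roughly 12 Gb/s of raw platter bandwidth, so peaking at ~7 Gb/s through one 10 GbE port is consistent once RAID parity, filesystem and network overheads are taken off the top.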
From: David Dyer-Bennet
Date: 2011-Apr-11 16:08 UTC
Subject: [zfs-discuss] Network video streaming [Was: Re: X4540 no next-gen product?]
On 04/08/2011 07:22 PM, J.P. King wrote:
> No, I haven't tried an S7000, but I've tried other kinds of network
> storage and from a design perspective, for my applications, it doesn't
> even make a single bit of sense. I'm talking about high-volume real-time
> video streaming, where you stream 500-1000 (x 8 Mbit/s) live streams
> from a machine over UDP. ... For my applications, I need a machine with
> as few processing components between the disks and the network as
> possible, to maximize throughput, maximize IOPS and minimize latency and
> jitter.

Amusing history here -- the "Thumper" was developed at Kealia specifically for their streaming video server. Sun then bought them, and continued the video server project until Oracle ate them (the Sun Streaming Video Server). That product supported 80,000 (not a typo) 4 megabit/sec video streams if fully configured. (Not off a single Thumper, though, I don't believe.)

However, there was a custom hardware board handling streaming, into multiple line cards with multiple 10G optical Ethernet interfaces. And a LOT of buffer memory; the card could support 2TB of RAM, though I believe real installations were using 512GB.

Data got from the Thumpers to the streaming board over Ethernet, though. In big chunks -- 10MB maybe? (Been a while; I worked on the user interface level, but had little to do with the streaming hardware.)

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info