Dear all,

Second post ;) Another question here: in most examples I noticed the InfiniBand or 10 GigE recommendation. Does this really do any good for the individual server connection?

Another assumption on my side was that one individual server can never saturate a full gigabit link due to the disk throughput limitation (maybe a few 100 MB/s at most). So if every server has a 1 Gigabit connection to a switch which in turn has a 10 GigE uplink, it would not be a bottleneck (as long as there are not too many servers sharing the 10 GigE uplink).

Correct?

--
Met Vriendelijke Groet,
Randall

Ciparo bv
Postbus 22248
3003 DE Rotterdam
Goudsesingel 178
3011 KD Rotterdam
The Netherlands
Tel: (31) 10 2136212
Fax: (31) 10 4046291
Direct Tel: (31) 10 2012159
E-mail: randall at ciparo.nl
www.ciparo.nl
www.aimreclaim.com
On Tue, Apr 21, 2009 at 9:58 AM, randall <randall at songshu.org> wrote:
> dear all,
>
> second post ;)
>
> another question here, also in most examples i noticed the infiniband or
> 10 GigE recommendation, does this really do any good for the individual
> server connection?
> another assumption on my side was that 1 individual server can never
> saturate a full gigabit link due to the disk throughput limitation,
> (maybe a few 100 MBps at most)
> so if every server has a 1 Gigabit connection to a switch which in turn
> has a 10 GigE uplink it would not be a bottleneck (as long as there are
> not too many servers sharing the 10 GigE uplink)
>
> correct?

If one server has multiple disks (or RAIDs) and each is being read
simultaneously, it is certainly possible to saturate a single GigE
connection under some conditions.

Sean
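A back-of-the-envelope check of Sean's point (my own numbers, not from the thread: ~125 MB/s raw payload on 1 GigE, and an assumed ~100 MB/s linear read per SATA disk):

```shell
# 1 GigE moves at most ~125 MB/s of payload (less after TCP/IP
# overhead), while one modern SATA disk reads roughly 100 MB/s.
link_mbps=1000                    # 1 GigE, in megabits/s
link_MBs=$(( link_mbps / 8 ))     # ~125 MB/s theoretical ceiling
disk_MBs=100                      # assumed per-disk linear read rate
disks=2
aggregate=$(( disk_MBs * disks ))
echo "link ceiling: ${link_MBs} MB/s, ${disks} disks: ${aggregate} MB/s"
if [ "$aggregate" -gt "$link_MBs" ]; then
    echo "parallel reads from ${disks} disks can saturate 1 GigE"
fi
```

So two ordinary disks read in parallel already exceed the link, which matches Sean's caveat.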
Sean Davis wrote:
> On Tue, Apr 21, 2009 at 9:58 AM, randall <randall at songshu.org> wrote:
>
>     dear all,
>
>     second post ;)
>
>     another question here, also in most examples i noticed the
>     infiniband or 10 GigE recommendation, does this really do any good
>     for the individual server connection?
>     another assumption on my side was that 1 individual server can
>     never saturate a full gigabit link due to the disk throughput
>     limitation, (maybe a few 100 MBps at most)
>
>     so if every server has a 1 Gigabit connection to a switch which in
>     turn has a 10 GigE uplink it would not be a bottleneck (as long as
>     there are not too many servers sharing the 10 GigE uplink)
>
>     correct?
>
> If one server has multiple disks (or RAIDs) and each is being read
> simultaneously, it is certainly possible to saturate a single GigE
> connection under some conditions.
>
> Sean

Thanks, that confirms my suspicion then. Meaning it would be overkill to
use more than a gigabit link, considering I never use the kind of hardware
that would be able to match these numbers. You're right, I should have
been more precise about what was assumed.

--
www.songshu.org
Just another collection of nuts
Hello!

2009/4/21 randall <randall at ciparo.nl>:
> dear all,
>
> second post ;)
>
> another question here, also in most examples i noticed the infiniband or
> 10 GigE recommendation, does this really do any good for the individual
> server connection?

IIRC, another recommendation is a RAID-6 array with 8-12 disks. I'd
expect 400-600 MB/s on linear read with hardware like that. In that
case, 1 GigE would be a bottleneck, so 10 GigE or InfiniBand might be a
reasonable recommendation.

IIRC, some posts on the mailing list point out that the performance of
GlusterFS (and most or all other distributed filesystems) is limited
by connection latency (e.g. 'replicate' has to ask each of the servers
if it has a newer version of a file and wait for the answer). At
100 Mbit/s it takes longer to push a packet (of equal size) through the
wire than it does at 1 Gbit/s or even 10 Gbit/s.

Maybe someone with access to a test setup with GigE, 10 GigE and/or
InfiniBand could benchmark this so that others might have a baseline
of what to expect?

> another assumption on my side was that 1 individual server can never
> saturate a full gigabit link due to the disk throughput limitation,
> (maybe a few 100 MBps at most)

AFAIK, a modern disk (>=1TB SATA) can deliver more than 100 MB/s
locally on linear read:

# hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads:  306 MB in  3.00 seconds = 101.84 MB/sec

# fdisk -l /dev/sdc

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes

Even a server with just two of these disks in a 'dht' or 'unify'
configuration might be able to saturate a 1 GigE network link under
specific conditions.
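To put rough numbers on the latency point: the serialization delay of a single standard 1500-byte Ethernet frame shrinks tenfold with each step up in link speed. A quick illustration (my own arithmetic, not from the post; integer microseconds, so 1.2 us truncates to 1):

```shell
# Serialization delay = frame size in bits / link speed in Mbit/s,
# which gives microseconds directly.
frame_bits=$(( 1500 * 8 ))        # 12000 bits per full Ethernet frame
for mbps in 100 1000 10000; do
    echo "${mbps} Mbit/s: $(( frame_bits / mbps )) us on the wire"
done
```

Serialization is only one part of round-trip latency (switch and stack overheads also matter), but it shows why faster links help latency-bound workloads even when they are nowhere near saturated for bandwidth.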
Some Solid State Disks are even rated at 250 MB/s for read.

> so if every server has a 1 Gigabit connection to a switch which in turn
> has a 10 GigE uplink it would not be a bottleneck (as long as there are
> not too many servers sharing the 10 GigE uplink)
>
> correct?

I'm not sure about that, but after looking at prices I'd do a lot of
testing to verify that the 1 GigE network adapter really is the bottleneck
before buying 10 GigE or InfiniBand hardware. On a cost/value comparison,
GigE might win on small systems with only one or two disks.

Harald Stürzebecher
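One way to do that testing is to measure raw network throughput independently of the disks, e.g. with iperf (hostname and durations below are placeholders; this sketch needs iperf installed on two hosts):

```shell
# On the server under test:
iperf -s                          # listen for throughput tests

# On a client machine (replace server1 with the real hostname):
iperf -c server1 -t 30            # single TCP stream for 30 seconds
iperf -c server1 -t 30 -P 4      # 4 parallel streams, closer to real load
```

If iperf already tops out near the GigE limit while your GlusterFS workload does not, the bottleneck is likely the disks or the filesystem rather than the network.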
2009/4/23 Harald Stürzebecher <haralds at cs.tu-berlin.de>

> Hello!
>
> 2009/4/21 randall <randall at ciparo.nl>:
> > dear all,
> >
> > second post ;)
> >
> > another question here, also in most examples i noticed the infiniband
> > or 10 GigE recommendation, does this really do any good for the
> > individual server connection?
>
> IIRC, another recommendation is a RAID-6 array with 8-12 disks. I'd
> expect 400-600 MB/s on linear read with hardware like that. In that
> case, 1 GigE would be a bottleneck so 10 GigE or InfiniBand might be a
> reasonable recommendation.
>
> IIRC, some posts on the mailing list point out that the performance of
> GlusterFS (and most or all other distributed filesystems) is limited
> by connection latency (e.g. 'replicate' has to ask each of the servers
> if it has a newer version of a file and wait for the answer). At
> 100 Mbit/s it takes longer to push a packet (of equal size) through the
> wire than it does at 1 Gbit/s or even 10 Gbit/s.
>
> Maybe someone with access to a test setup with GigE, 10 GigE and/or
> InfiniBand could benchmark this so that others might have a baseline
> of what to expect?
>
> > another assumption on my side was that 1 individual server can never
> > saturate a full gigabit link due to the disk throughput limitation,
> > (maybe a few 100 MBps at most)
>
> AFAIK, a modern disk (>=1TB SATA) can deliver more than 100 MB/s
> locally on linear read:
>
> # hdparm -t /dev/sdc
>
> /dev/sdc:
>  Timing buffered disk reads:  306 MB in  3.00 seconds = 101.84 MB/sec
>
> # fdisk -l /dev/sdc
>
> Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
>
> Even a server with just two of these disks in a 'dht' or 'unify'
> configuration might be able to saturate a 1 GigE network link under
> specific conditions.
>
> Some Solid State Disks are even rated at 250 MB/s for read.
> > so if every server has a 1 Gigabit connection to a switch which in
> > turn has a 10 GigE uplink it would not be a bottleneck (as long as
> > there are not too many servers sharing the 10 GigE uplink)
> >
> > correct?
>
> I'm not sure about that, but after looking at prices I'd do a lot of
> testing that the 1 GigE network adapter really is a bottleneck before
> buying 10 GigE or InfiniBand hardware. On a cost/value comparison GigE
> might win on small systems with only one or two disks.

And, while it may be a pain, I think it is possible to aggregate multiple
GigE connections.

Sean
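For reference, on Linux that aggregation is done with the bonding driver. A minimal sketch following the kernel bonding documentation (interface names, IP address, and mode are placeholders; 802.3ad mode also needs LACP support on the switch, and this requires root):

```shell
# Load the bonding driver in 802.3ad (LACP) mode; bond0 is created
# automatically. miimon=100 checks link state every 100 ms.
modprobe bonding mode=802.3ad miimon=100

# Bring up the bond interface with its address, then enslave the
# two physical GigE ports to it.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Note that a single TCP stream still rides one physical link, so bonding raises aggregate throughput across many clients rather than the speed of one transfer.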