2011 May 12
Slow reading speed over RDMA
...r lustre setup with a constant
400MByte/s throughput, for the same tests, with the same number of disks
going through the same RAID controller. Both tests were run at different
times.
As a comparison, for the sequential reading tests, gluster and lustre
give me results of 600 to 700MByte/s respectively.
The gluster configuration files have no extra modifications; the disks are
formatted with ext3 and the volume is created via:
gluster volume create test stripe 16 transport rdma 10.1.0.4:/disk1
10.1.0.4:/disk2 ... 10.1.0.4:/disk16
Using dd to write/read from each of the disks gives me about 100MByte/s....
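A minimal sketch of the per-disk dd test being described, assuming the /disk1../disk16 brick mount points from the volume-create command above (the TARGET and SIZE_MB parameters and their defaults are illustrative, not from the post):

```shell
# Hypothetical per-disk dd benchmark; run once per brick mount point.
# TARGET would be /disk1 .. /disk16 in the setup described; it
# defaults to /tmp here only so the sketch runs anywhere.
TARGET=${1:-/tmp}
SIZE_MB=${2:-64}

# Write test: dd reports the achieved throughput on stderr.
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count="$SIZE_MB" conv=fsync

# Read test. To measure the disk rather than the page cache,
# drop caches first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TARGET/ddtest" of=/dev/null bs=1M

rm -f "$TARGET/ddtest"
```

Repeating this against every brick individually is a quick way to confirm that the aggregate stripe throughput is limited by gluster/transport rather than by any single disk.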
2008 Jun 13
Fwd: RHEL5 network throughput/scalability
...de, that is the bandwidth, and the
throughput seen from the NFS server on the other side of the nodes shows a
linear increase from around 100+MByte/sec up to 1GByte/sec. However, when we
add another node to the equation, the bandwidth/throughput becomes
erratic/inconsistent and drops to around 500-700MByte/sec. However, if I try
the same setup with RHEL4U6 I do not get the same behaviour; it sustains the
bandwidth at 1GByte/sec. The setup is like this: 48 nodes sharing a 48-port
access switch that is uplinked via a 10G link to a Cisco 6509 switch, which
is linked to a clustered NFS file system that consi...