I am testing GlusterFS on this equipment:

  Backend: LSI 7000, 80 TB, 24 LUNs
  4 OSS: Intel-based servers, connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM
  1 Intel-based main server, connected to the OSS via QDR InfiniBand, 12 GB RAM
  16 load generators, each with 2x Xeon X5670, 12 GB RAM, and on-board QDR InfiniBand

I used IOR for the test and got the following results:

  /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 /gluster/C/IOR -F -k -b10G -t1m

IOR-2.10.3: MPI Coordinated Test of Parallel I/O

Run began: Tue Oct 19 09:27:03 2010
Command line used: /gluster/C/IOR -F -k -b10G -t1m
Machine: Linux node1

Summary:
        api                 = POSIX
        test filename       = testFile
        access              = file-per-process
        ordering in a file  = sequential offsets
        ordering inter file = no tasks offsets
        clients             = 16 (1 per node)
        repetitions         = 1
        xfersize            = 1 MiB
        blocksize           = 10 GiB
        aggregate filesize  = 160 GiB

Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
write      1720.80    1720.80    1720.80     0.00     1720.80    1720.80    1720.80     0.00     95.21174
read       1415.64    1415.64    1415.64     0.00     1415.64    1415.64    1415.64     0.00     115.73604

Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
Max Read:  1415.64 MiB/sec (1484.40 MB/sec)

Run finished: Tue Oct 19 09:30:34 2010

Why is read < write? Is this normal for GlusterFS?

best regards
Aleksandr
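For reference, the reported rates are consistent with the 160 GiB (163840 MiB) aggregate file size divided by the mean elapsed times from the table above:

  write: 163840 MiB / 95.21174 s  ~ 1720.8 MiB/s
  read:  163840 MiB / 115.73604 s ~ 1415.6 MiB/s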
On Wednesday 04 May 2011 12:44 PM, Aleksanyan, Aleksandr wrote:
> I am testing GlusterFS on this equipment:
>
>   Backend: LSI 7000, 80 TB, 24 LUNs
>   4 OSS: Intel-based servers, connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM

Can you please clarify what OSS means here? And please mention what your GlusterFS configuration looks like.

Pavan

> [...]
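A minimal sketch of how that configuration could be captured, assuming the GlusterFS 3.x management CLI is available (the volume name "testvol" is a placeholder):

  # Assumes the gluster CLI from GlusterFS 3.x; "testvol" is a placeholder volume name
  gluster volume info testvol    # volume type, transport, and brick list for one volume
  gluster volume info all        # or the same for every configured volume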
On 05/04/2011 03:14 AM, Aleksanyan, Aleksandr wrote:
> I am testing GlusterFS on this equipment:
[...]
> Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
> Max Read:  1415.64 MiB/sec (1484.40 MB/sec)

Hmmm ... that seems low. With 24 bricks we were getting ~10+ GB/s two years ago on the 2.0.x series of code. You might have a bottleneck somewhere in the Fibre Channel portion of things.

> Run finished: Tue Oct 19 09:30:34 2010
> Why is read < write? Is this normal for GlusterFS?

It's generally normal for most clustered/distributed file systems that have any sort of write caching (RAID controller cache, brick OS write cache, etc.). The writes can be absorbed into cache (with 16 clients, the 160 GiB aggregate works out to only about 10 GiB of cache needed per unit) and committed to disk later, while the reads have to come back from disk.

When we do testing on our units, we recommend using data sizes that far exceed any conceivable cache. We regularly do single-machine TB-sized reads and writes (as well as cluster-storage reads and writes in the 1-20 TB range) as part of our normal testing regimen. We recommend reporting the non-cached performance numbers, as that is what users will most often see (as a nominal case).

Regards,

Joe

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
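A hedged sketch of a re-run along these lines, assuming the IOR 2.x options -e (fsync on close) and -C (reorder tasks for the read-back phase so a client does not re-read the data it just wrote from its own page cache), with the per-process block size raised well beyond the 12 GB of node RAM; verify the flags against the local IOR build before relying on them:

  # Sketch only: -e and -C are assumed from the IOR 2.x option set; check /gluster/C/IOR -h
  # 48 GiB per process x 16 processes = 768 GiB aggregate, far above the ~192 GB of total node RAM
  /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
      /gluster/C/IOR -F -k -e -C -b48G -t1m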