Hi,
We finally managed to do the dd tests for an NFS-mounted gluster file
system. The profile results from that test are in
http://mseas.mit.edu/download/phaley/GlusterUsers/profile_gluster_nfs_test
The summary of the dd tests (a sketch of the commands follows the numbers):
* writing to the gluster disk mounted with fuse: 5 Mb/s
* writing to the gluster disk mounted with nfs: 200 Mb/s
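
For reference, a sketch of the dd commands behind these numbers (the
conv=sync oflag=sync flags are the ones from the earlier fuse test quoted
below; /gdata is our fuse mount point, while /gdata-nfs is only a
placeholder for wherever the NFS mount lives):

    # write zeros through the fuse mount (size is illustrative)
    dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000 conv=sync oflag=sync

    # the same write through the NFS mount of the volume (placeholder path)
    dd if=/dev/zero of=/gdata-nfs/zero1 bs=1M count=1000 conv=sync oflag=sync
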
Pat
On 05/05/2017 08:11 PM, Pat Haley wrote:
> Hi,
>
> We redid the dd tests (this time using conv=sync oflag=sync to avoid
> caching questions). The profile results are in
>
> http://mseas.mit.edu/download/phaley/GlusterUsers/profile_gluster_fuse_test
>
>
> On 05/05/2017 12:47 PM, Ravishankar N wrote:
>> On 05/05/2017 08:42 PM, Pat Haley wrote:
>>>
>>> Hi Pranith,
>>>
>>> I presume you are asking for some version of the profile data that
>>> just shows the dd test (or a repeat of the dd test). If yes, how do
>>> I extract just that data?
>> Yes, that is what he is asking for. Just clear the existing profile
>> info using `gluster volume profile volname clear` and run the dd test
>> once. Then when you run profile info again, it should just give you
>> the stats for the dd test.
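>>
>> For example, the sequence would look roughly like this ("volname" is a
>> stand-in for the actual volume name, and the dd line is just the same
>> test as before):
>>
>>     # reset the accumulated profile counters
>>     gluster volume profile volname clear
>>
>>     # run the single dd test on the mounted volume
>>     dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000 conv=sync oflag=sync
>>
>>     # the stats now cover only the dd run above
>>     gluster volume profile volname info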
>>>
>>> Thanks
>>>
>>> Pat
>>>
>>>
>>>
>>> On 05/05/2017 10:58 AM, Pranith Kumar Karampuri wrote:
>>>> Hi Pat,
>>>> Let us concentrate on the performance numbers part for now.
>>>> We will look at the permissions issue after this.
>>>>
>>>> As per the profile info, only 2.6% of the workload is writes.
>>>> There are too many lookups.
>>>>
>>>> Would it be possible to get the data for just the dd test you were
>>>> doing earlier?
>>>>
>>>>
>>>> On Fri, May 5, 2017 at 8:14 PM, Pat Haley <phaley at mit.edu> wrote:
>>>>
>>>>
>>>> Hi Pranith & Ravi,
>>>>
>>>> A couple of quick questions
>>>>
>>>> We have profile turned on. Are there specific queries we should
>>>> make that would help debug our configuration? (The default
>>>> profile info was previously sent in
>>>> http://lists.gluster.org/pipermail/gluster-users/2017-May/030840.html
>>>> but I'm not sure if that is what you were looking for.)
>>>>
>>>> We also started to do a test on serving gluster over NFS. We
>>>> rediscovered an issue we previously reported
>>>> (http://lists.gluster.org/pipermail/gluster-users/2016-September/028289.html)
>>>> in that the NFS-mounted version was ignoring the group write
>>>> permissions. What specific information would be useful in
>>>> debugging this?
>>>>
>>>> Thanks
>>>>
>>>> Pat
>>>>
>>>>
>>>>
>>>> On 04/14/2017 03:01 AM, Ravishankar N wrote:
>>>>> On 04/14/2017 12:20 PM, Pranith Kumar Karampuri wrote:
>>>>>>
>>>>>>
>>>>>> On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N
>>>>>> <ravishankar at redhat.com> wrote:
>>>>>>
>>>>>> Hi Pat,
>>>>>>
>>>>>> I'm assuming you are using the gluster native (fuse) mount.
>>>>>> If it helps, you could try mounting it via gluster NFS (gnfs)
>>>>>> and then see if there is an improvement in speed. Fuse mounts
>>>>>> are slower than gnfs mounts, but you get the benefit of
>>>>>> avoiding a single point of failure. Unlike fuse mounts, if the
>>>>>> gluster node containing the gnfs server goes down, all mounts
>>>>>> done using that node will fail. For fuse mounts, you could try
>>>>>> tweaking the write-behind xlator settings to see if it helps.
>>>>>> See the performance.write-behind and
>>>>>> performance.write-behind-window-size options in `gluster
>>>>>> volume set help`. Of course, even for gnfs mounts, you can
>>>>>> achieve fail-over by using CTDB.
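>>>>>>
>>>>>> For instance, something along these lines (the server name, volume
>>>>>> name, mount point and window size are only examples; check
>>>>>> `gluster volume set help` for the defaults and valid ranges):
>>>>>>
>>>>>>     # gnfs (NFSv3) mount of the same volume from one of the nodes
>>>>>>     mount -t nfs -o vers=3 server1:/volname /mnt/gnfs
>>>>>>
>>>>>>     # write-behind tuning for the fuse mount
>>>>>>     gluster volume set volname performance.write-behind on
>>>>>>     gluster volume set volname performance.write-behind-window-size 4MB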
>>>>>>
>>>>>>
>>>>>> Ravi,
>>>>>> Do you have any data that suggests fuse mounts are
>>>>>> slower than gNFS servers?
>>>>> I have heard anecdotal evidence time and again on the ML and
>>>>> IRC, which is why I wanted to compare it with NFS numbers on
>>>>> his setup.
>>>>>>
>>>>>> Pat,
>>>>>> I see that I am late to the thread, but do you happen
>>>>>> to have "profile info" of the workload?
>>>>>>
>>>>>> You can follow
>>>>>> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
>>>>>> to get the information.
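>>>>>>
>>>>>> In short, the sequence described there is roughly ("volname" again
>>>>>> being a placeholder for the actual volume name):
>>>>>>
>>>>>>     gluster volume profile volname start
>>>>>>     # ... run the workload ...
>>>>>>     gluster volume profile volname info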
>>>>> Yeah, let's see if the profile info shows anything interesting.
>>>>> -Ravi
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Ravi
>>>>>>
>>>>>>
>>>>>> On 04/08/2017 12:07 AM, Pat Haley wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> We noticed a dramatic slowness when writing to a gluster
>>>>>>> disk when compared to writing to an NFS disk.
>>>>>>> Specifically, when using dd (data duplicator) to write a
>>>>>>> 4.3 GB file of zeros:
>>>>>>>
>>>>>>> * on NFS disk (/home): 9.5 Gb/s
>>>>>>> * on gluster disk (/gdata): 508 Mb/s
>>>>>>>
>>>>>>> The gluster disk is 2 bricks joined together, no
>>>>>>> replication or anything else. The hardware is
>>>>>>> (literally) the same:
>>>>>>>
>>>>>>> * one server with 70 hard disks and a hardware RAID card.
>>>>>>> * 4 disks in a RAID-6 group (the NFS disk)
>>>>>>> * 32 disks in a RAID-6 group (the max allowed by the
>>>>>>>   card, /mnt/brick1)
>>>>>>> * 32 disks in another RAID-6 group (/mnt/brick2)
>>>>>>> * 2 hot spares
>>>>>>>
>>>>>>> Some additional information and more test results
>>>>>>> (after changing the log level):
>>>>>>>
>>>>>>> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>>>>>>> CentOS release 6.8 (Final)
>>>>>>> RAID bus controller: LSI Logic / Symbios Logic MegaRAID
>>>>>>> SAS-3 3108 [Invader] (rev 02)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Create the file to /gdata (gluster)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000
>>>>>>> 1000+0 records in
>>>>>>> 1000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 1.91876 s, *546 MB/s*
>>>>>>>
>>>>>>> *Create the file to /home (ext4)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M count=1000
>>>>>>> 1000+0 records in
>>>>>>> 1000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 0.686021 s, *1.5 GB/s* - 3 times as fast
>>>>>>>
>>>>>>>
>>>>>>> *Copy from /gdata to /gdata (gluster to gluster)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
>>>>>>> 2048000+0 records in
>>>>>>> 2048000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 101.052 s, *10.4 MB/s*
>>>>>>> - realllyyy slooowww
>>>>>>>
>>>>>>>
>>>>>>> *Copy from /gdata to /gdata, 2nd time (gluster to gluster)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
>>>>>>> 2048000+0 records in
>>>>>>> 2048000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 92.4904 s, *11.3 MB/s*
>>>>>>> - realllyyy slooowww again
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Copy from /home to /home (ext4 to ext4)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero2
>>>>>>> 2048000+0 records in
>>>>>>> 2048000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 3.53263 s, *297 MB/s*
>>>>>>> - 30 times as fast
>>>>>>>
>>>>>>>
>>>>>>> *Copy from /home to /home (ext4 to ext4)*
>>>>>>> [root at mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero3
>>>>>>> 2048000+0 records in
>>>>>>> 2048000+0 records out
>>>>>>> 1048576000 bytes (1.0 GB) copied, 4.1737 s, *251 MB/s*
>>>>>>> - 30 times as fast
>>>>>>>
>>>>>>>
>>>>>>> As a test, can we copy data directly to the xfs
>>>>>>> mountpoint (/mnt/brick1) and bypass gluster?
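>>>>>>>
>>>>>>> To be concrete, the idea is the same kind of dd write as above,
>>>>>>> but aimed at the brick's own xfs mount point instead of the
>>>>>>> gluster mount (the file name is just an example):
>>>>>>>
>>>>>>>     # write directly to the brick, bypassing gluster
>>>>>>>     dd if=/dev/zero of=/mnt/brick1/zero_direct bs=1M count=1000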
>>>>>>>
>>>>>>>
>>>>>>> Any help you could give us would be appreciated.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>>>>>>> Pat Haley                          Email:  phaley at mit.edu
>>>>>>> Center for Ocean Engineering       Phone:  (617) 253-6824
>>>>>>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>>>>>>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>>>>>>> 77 Massachusetts Avenue
>>>>>>> Cambridge, MA 02139-4301
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Gluster-users mailing list
>>>>>>> Gluster-users at gluster.org
>>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users at gluster.org
>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>> --
>>>>>> Pranith
>>>>>
>>>> --
>>>>
>>>>
>>>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>>>> Pat Haley                          Email:  phaley at mit.edu
>>>> Center for Ocean Engineering       Phone:  (617) 253-6824
>>>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>>>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>>>> 77 Massachusetts Avenue
>>>> Cambridge, MA 02139-4301
>>>>
>>>> --
>>>> Pranith
>>> --
>>>
>>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>>> Pat Haley                          Email:  phaley at mit.edu
>>> Center for Ocean Engineering       Phone:  (617) 253-6824
>>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>>> 77 Massachusetts Avenue
>>> Cambridge, MA 02139-4301
>>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley                          Email:  phaley at mit.edu
> Center for Ocean Engineering       Phone:  (617) 253-6824
> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA 02139-4301
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: phaley at mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301