Hi,
I have one more question about Gluster linear scale-out performance,
specifically for the "write-behind off" case. When "write-behind" is off,
with the same stripe volumes and other settings as in the earlier thread,
the storage I/O does not seem to scale with the number of storage nodes.
In my experiment, whether I have 2 brick server nodes or 8, the aggregated
Gluster I/O performance stays at ~100 MB/sec, and the fio benchmark gives
the same result. When "write-behind" is on, the storage performance does
scale out linearly as the number of brick server nodes increases.
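
For reference, a sequential-write fio test along these lines (the mount
path, file size, and job count below are placeholders, not the exact
settings used) bypasses the client page cache with direct I/O and adds
parallelism with multiple jobs:

  fio --name=seqwrite --directory=/mnt/glusterfs --rw=write --bs=1M \
      --size=4g --numjobs=8 --direct=1 --ioengine=libaio --group_reporting
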
Whether the write-behind option is on or off, I thought the Gluster I/O
performance should be pooled and aggregated across the bricks as a whole.
If that is the case, why do I get a flat ~100 MB/sec when "write-behind"
is off? Please advise me if I have misunderstood something.
Thanks,
Qing
On Tue, Jul 21, 2020 at 7:29 PM Qing Wang <qw at g.clemson.edu> wrote:
> fio gives me the correct linear scale-out results, and you're right, the
> storage cache is the root cause that makes the dd measurement results not
> accurate at all.
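
As an aside, if dd must be used, its numbers are less distorted by the
client cache when dd flushes the data before reporting throughput;
conv=fdatasync is a standard GNU dd option, and the output path and size
below are placeholders:

  dd if=/dev/zero of=/mnt/glusterfs/dd_test bs=1M count=26000 conv=fdatasync
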
>
> Thanks,
> Qing
>
>
> On Tue, Jul 21, 2020 at 2:53 PM Yaniv Kaul <ykaul at redhat.com> wrote:
>
>>
>>
>> On Tue, 21 Jul 2020, 21:43 Qing Wang <qw at g.clemson.edu> wrote:
>>
>>> Hi Yaniv,
>>>
>>> Thanks for the quick response. I forgot to mention that I am testing
>>> write performance, not read performance. In this case, would the
>>> client cache hit rate still be a big issue?
>>>
>>
>> It's not hitting the storage directly. Since it's also single threaded,
>> it may also not saturate it. I highly recommend testing properly.
>> Y.
>>
>>
>>> I'll use fio to run my test once again, thanks for the suggestion.
>>>
>>> Thanks,
>>> Qing
>>>
>>> On Tue, Jul 21, 2020 at 2:38 PM Yaniv Kaul <ykaul at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 21 Jul 2020, 21:30 Qing Wang <qw at g.clemson.edu> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am trying to test Gluster linear scale-out performance by adding
>>>>> more storage servers/bricks and measuring the storage I/O
>>>>> performance. To vary the number of storage servers, I create several
>>>>> "stripe" volumes that contain 2 brick servers, 3 brick servers, 4
>>>>> brick servers, and so on. On the gluster client side, I use "dd
>>>>> if=/dev/zero of=/mnt/glusterfs/dns_test_data_26g bs=1M count=26000"
>>>>> to create 26G of data (or larger), which is distributed across the
>>>>> corresponding gluster servers (each hosting a brick), and "dd"
>>>>> reports the final I/O throughput. The interconnect is 40G InfiniBand,
>>>>> although I didn't do any specific configuration to use its advanced
>>>>> features.
>>>>>
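
For reference, a 2-brick "stripe" volume of the kind described above can be
created roughly as follows; the volume name, host names, and brick paths
are placeholders, not the poster's actual setup:

  gluster volume create test-stripe stripe 2 transport tcp \
      server1:/data/brick1 server2:/data/brick1
  gluster volume start test-stripe
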
>>>>
>>>> Your dd command is inaccurate, as it'll hit the client cache. It is
>>>> also single threaded. I suggest switching to fio.
>>>> Y.
>>>>
>>>>
>>>>> What confuses me is that the storage I/O does not seem to relate to
>>>>> the number of storage nodes, even though the Gluster documentation
>>>>> says it should scale linearly. For example, when "write-behind" is on
>>>>> and the InfiniBand "jumbo frame" (connected mode) is enabled, I get
>>>>> ~800 MB/sec reported by "dd" whether I have 2 brick servers or 8 --
>>>>> in the 2-server case each server handles ~400 MB/sec, and in the
>>>>> 4-server case each server handles ~200 MB/sec. So the per-server I/O
>>>>> does aggregate to the final storage I/O (800 MB/sec), but this is not
>>>>> "linear scale-out".
>>>>>
>>>>> Can somebody help me understand why this is the case? I may well have
>>>>> some misunderstanding/misconfiguration here. Please correct me if I
>>>>> do, thanks!
>>>>>
>>>>> Best,
>>>>> Qing
>>>>