Adding (back) gluster-users.
-Krutika
On Fri, Jun 21, 2019 at 1:09 PM Krutika Dhananjay <kdhananj at redhat.com>
wrote:
>
>
> On Fri, Jun 21, 2019 at 12:43 PM Cristian Del Carlo <
> cristian.delcarlo at targetsolutions.it> wrote:
>
>> Thanks Strahil,
>>
>> in this link
>>
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/sect-creating_replicated_volumes
>> I see:
>>
>>
>> *Sharding has one supported use case: in the context of providing Red Hat
>> Gluster Storage as a storage domain for Red Hat Enterprise Virtualization,
>> to provide storage for live virtual machine images. Note that sharding is
>> also a requirement for this use case, as it provides significant
>> performance improvements over previous implementations.*
>>
>> The default settings in GlusterFS 6.1 appear to be:
>>
>> features.shard-block-size 64MB
>>
>> features.shard-lru-limit 16384
>>
>> features.shard-deletion-rate 100
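>>
>> For reference, the effective per-volume values can be checked with the
>> gluster CLI (VOLNAME below is just a placeholder for the volume name):
>>
>> # list all shard-related options and their current values
>> gluster volume get VOLNAME all | grep shard
>> # or query a single option
>> gluster volume get VOLNAME features.shard-block-size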
>>
>
> That's right. Based on the tests we'd conducted internally, we'd found
> 64MB to be a good number both in terms of self-heal and IO performance. 4MB
> is a little on the lower side in that sense. The benefits of some features
> like eager-locking are lost if the shard size is too small. You can perhaps
> run some tests with 64MB shard-block-size to begin with, and tune it if it
> doesn't fit your needs.
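>
> For example (a minimal sketch, with VOLNAME standing in for the actual
> volume name; the new size only applies to files created after the change):
>
> # keep the 64MB default shard size on the test volume
> gluster volume set VOLNAME features.shard-block-size 64MB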
>
> -Krutika
>
>
>> My bricks are on an XFS filesystem. I'll try different block sizes, but
>> if I understand correctly, small block sizes are preferable to big block
>> sizes, and if in doubt I will use 4M.
>>
>> Many thanks for the warning, message received! :-)
>>
>> Best Regards,
>>
>> Cristian
>>
>>
>> On Thu, 20 Jun 2019 at 22:13, Strahil Nikolov <hunter86_bg at yahoo.com>
>> wrote:
>>
>>> Sharding is complex. It helps heal faster, as only the shards that got
>>> changed will be replicated. But imagine a 1GB shard in which only 512k
>>> got updated - in such a case you will copy the whole shard to the other
>>> replicas.
>>> RHV & oVirt use a default shard size of 4M, which is the exact size of
>>> the default PE in LVM.
>>>
>>> On the other hand, it speeds things up, as Gluster can balance the
>>> shards properly across the replicas and thus you can distribute the load
>>> evenly on the cluster.
>>> It is not a coincidence that RHV and oVirt use sharding by default.
>>>
>>> Just a warning.
>>> NEVER, EVER, DISABLE SHARDING!!! ONCE ENABLED - STAYS ENABLED!
>>> Don't ask how I learnt that :)
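>>>
>>> (Enabling it is a one-line operation - a minimal sketch below, with
>>> VOLNAME as a placeholder. Only files created after enabling get sharded,
>>> and as said above there is no safe way back.)
>>>
>>> # turn sharding on for an existing volume
>>> gluster volume set VOLNAME features.shard enable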
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>> ? ?????????, 20 ??? 2019 ?., 18:32:00 ?. ???????+3, Cristian Del
Carlo <
>>> cristian.delcarlo at targetsolutions.it> ??????:
>>>
>>>
>>> Hi,
>>>
>>> thanks for your help.
>>>
>>> I am planning to use libvirtd with plain KVM.
>>>
>>> OK, I will use libgfapi.
>>>
>>> I'm confused about the use of sharding: is it useful in this
>>> configuration? Doesn't sharding help limit the bandwidth in the event of
>>> a rebalance?
>>>
>>> So in the VM settings I need to use directsync to avoid corruption.
>>>
>>> Thanks again,
>>>
>>> On Thu, 20 Jun 2019 at 12:25, Strahil <hunter86_bg at yahoo.com>
>>> wrote:
>>>
>>> Hi,
>>>
>>> Are you planning to use oVirt, plain KVM, or OpenStack?
>>>
>>> I would recommend using Gluster v6.1, as it is the latest stable
>>> version and will have longer support than the older versions.
>>>
>>> FUSE vs libgfapi - use the latter, as it has better performance and less
>>> overhead on the host. oVirt supports both libgfapi and FUSE.
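>>>
>>> With plain libvirt/KVM, a disk that goes through libgfapi is defined as
>>> a "network" disk in the domain XML. A minimal sketch (volume name, image
>>> path and host below are placeholders; this needs a QEMU build with the
>>> gluster block driver, on CentOS 7 typically qemu-kvm-ev):
>>>
>>> <disk type='network' device='disk'>
>>>   <!-- cache='none' keeps I/O direct; see the note on live migration below -->
>>>   <driver name='qemu' type='qcow2' cache='none'/>
>>>   <source protocol='gluster' name='VOLNAME/vm1.qcow2'>
>>>     <host name='gluster-node1' port='24007'/>
>>>   </source>
>>>   <target dev='vda' bus='virtio'/>
>>> </disk>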
>>>
>>> Also, use replica 3 because you will have better read performance
>>> compared to replica 2 arbiter 1.
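>>>
>>> As a rough sketch (the volume name, hostnames and brick paths below are
>>> placeholders), a replica 3 volume spread over your 4 nodes can be laid
>>> out as 2 x 3 distributed-replicate, as long as each set of three bricks
>>> lands on three different nodes:
>>>
>>> # 6 bricks -> two replica-3 sets; no node holds two copies of the same data
>>> gluster volume create vmstore replica 3 \
>>>     node1:/bricks/b1/vmstore node2:/bricks/b1/vmstore node3:/bricks/b1/vmstore \
>>>     node2:/bricks/b2/vmstore node3:/bricks/b2/vmstore node4:/bricks/b2/vmstore
>>> gluster volume start vmstore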
>>>
>>> Sharding is a tradeoff between CPU (when there is no sharding, the
>>> gluster shd must calculate the offsets over the whole VM disk) and
>>> bandwidth (the whole shard is replicated even if only 512k needs to be
>>> synced).
>>>
>>> If you will do live migration, you do not want to cache, in order to
>>> avoid corruption.
>>> That is why oVirt uses direct I/O.
>>> Still, you can check the gluster settings mentioned in the Red Hat
>>> documentation for Virt/OpenStack.
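>>>
>>> On recent Gluster versions, most of those recommended virtualization
>>> options (sharding, the cache-related settings, etc.) can be applied in
>>> one go with the predefined "virt" option group - a minimal sketch, with
>>> VOLNAME as a placeholder:
>>>
>>> # applies the options listed in /var/lib/glusterd/groups/virt
>>> gluster volume set VOLNAME group virt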
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>> On Jun 20, 2019 13:12, Cristian Del Carlo <
>>> cristian.delcarlo at targetsolutions.it> wrote:
>>>
>>> Hi,
>>>
>>> I'm testing GlusterFS before using it in production; it will be used
>>> to store VM images for nodes running libvirtd.
>>>
>>> In production I will have 4 nodes connected with a dedicated 20 Gbit/s
>>> network.
>>>
>>> Which version should I use in production on CentOS 7.x? Should I use
>>> Gluster version 6?
>>>
>>> Is FUSE the best method to make the volume available to libvirtd?
>>>
>>> I see that striped volumes are deprecated. Is it reasonable to use a
>>> volume with 3 replicas on 4 nodes and sharding enabled?
>>> Is there any advantage to using a sharded volume in this context? I
>>> think it could have a positive impact on read performance or
>>> rebalancing. Is that true?
>>>
>>> In the VM configuration I use virtio disks. What is the best disk cache
>>> setting for performance: none, default or writeback?
>>>
>>> Thanks in advance for your patience and answers.
>>>
>>> Thanks,
>>>
>>>
>>> *Cristian Del Carlo*
>>>
>>>
>>>
>>> --
>>>
>>>
>>> *Cristian Del Carlo*
>>>
>>> *Target Solutions s.r.l.*
>>>
>>> *T* +39 0583 1905621
>>> *F* +39 0583 1905675
>>> *@* cristian.delcarlo at targetsolutions.it
>>>
>>> http://www.targetsolutions.it
>>> P.IVA e C.Fiscale: 01815270465 Reg. Imp. di Lucca
>>> Capitale Sociale: €11.000,00 iv - REA n° 173227
>>>
>>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>