similar to: trusted.ec.dirty attribute

Displaying 20 results from an estimated 5000 matches similar to: "trusted.ec.dirty attribute"

2017 Sep 23
1
EC 1+2
Already read that. Seems that I have to use a multiple of 512, so 512*(3-2) is 512. Seems fine. On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote: > Hi > > Take a look at this link (under "Optimal volumes") for Erasure Coded > volume optimal configuration > > http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/ >
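The sizing rule mentioned above can be sketched as a quick shell check. The 512-byte unit and the bricks-minus-redundancy formula follow the "Optimal volumes" guidance linked in the reply; the 3-brick, redundancy-2 geometry is the "1+2" layout from this thread:

```shell
# For a dispersed volume, I/O sizes align best on multiples of
# 512 * (bricks - redundancy) bytes. With 3 bricks and redundancy 2
# (the "1+2" layout discussed above) that multiple is 512 bytes.
BRICKS=3
REDUNDANCY=2
MULTIPLE=$((512 * (BRICKS - REDUNDANCY)))
echo "optimal multiple: ${MULTIPLE} bytes"
```

With more data bricks the multiple grows accordingly, e.g. a 4+2 volume would align on 512*(6-2) = 2048 bytes.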
2017 Jul 31
2
Hot Tier
Hi, Before you try turning off the perf translators, can you send us the following, so we can make sure that other things haven't gone wrong: the log files for tier (it would be better if you attach the other logs too), the version of gluster you are using, the client, and the output of: gluster v info gluster v get v1 performance.io-cache gluster v get v1
2017 Sep 26
2
sparse files on EC volume
Hi Xavi At this time I'm using 'plain' bricks with XFS. I'll be moving to LVM cached bricks. There is no RAID for data bricks, but I'll be using hardware RAID10 for SSD cache disks (I can use 'writeback' cache in this case). 'Small file performance' is the main reason I'm looking at different options, i.e. using formatted sparse files. I spent considerable
2017 Sep 27
0
sparse files on EC volume
Have you done any testing with replica 2/3? IIRC my replica 2/3 tests outperformed EC on smallfile workloads; it may be worth looking into if you can't get EC up to where you need it to be. -b ----- Original Message ----- > From: "Dmitri Chebotarov" <4dimach at gmail.com> > Cc: "gluster-users" <Gluster-users at gluster.org> > Sent: Tuesday,
2017 Sep 23
0
EC 1+2
Hi Take a look at this link (under "Optimal volumes") for Erasure Coded volume optimal configuration http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/ On Sat, Sep 23, 2017 at 10:01 Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Is it possible to create a dispersed volume 1+2? (Almost the same as replica > 3, the same as RAID-6) > > If
2017 Sep 23
3
EC 1+2
Is it possible to create a dispersed volume 1+2? (Almost the same as replica 3, the same as RAID-6) If yes, how many servers do I have to add in the future to expand the storage? 1 or 3?
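A sketch of the commands involved, assuming the standard disperse syntax (server and brick names here are hypothetical placeholders). A dispersed volume must be grown by whole disperse sets, so for this geometry the answer to the expansion question is 3 bricks at a time:

```shell
# Hypothetical servers/paths. Create a 3-brick dispersed volume with
# redundancy 2 (1 data + 2 redundancy, the "1+2" asked about above).
gluster volume create vol1 disperse 3 redundancy 2 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1

# Expansion adds bricks in multiples of the disperse count, so 3 more:
gluster volume add-brick vol1 \
    server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
```

Note that with only 1 data brick the usable capacity is one third of the raw capacity, which is why replica 3 is usually preferred at this scale.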
2017 Aug 01
0
Hot Tier
Hi, You have missed the log files. Can you attach them? On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > At this point I have already detached the Hot Tier volume to run rebalance. Many > volume settings only take effect for new data (or rebalance), so I > thought maybe this was the case with Hot Tier as well. Once rebalance > finishes,
2017 Jul 31
1
Hot Tier
Hi At this point I have already detached the Hot Tier volume to run rebalance. Many volume settings only take effect for new data (or rebalance), so I thought maybe this was the case with Hot Tier as well. Once rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
2017 Jul 30
2
Hot Tier
Hi I'm looking for advice on the hot tier feature - how can I tell if the hot tier is working? I've attached a replicated-distributed hot tier to an EC volume. Yet, I don't think it's working; at least I don't see any files directly on the bricks (only the folder structure). The 'status' command shows all 0s and 'In progress' for all servers. ~]# gluster volume tier home
2017 Jul 31
2
Hot Tier
Hi, If it was just reads then the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
2017 May 29
1
Heal operation detail of EC volumes
Hi, When a brick fails in EC, what is the healing read/write data path? Which processes do the operations? Assume a 2GB file is being healed in a 16+4 EC configuration. I was thinking that the SHD daemon on the failed brick's host will read 2GB from the network, reconstruct its 100MB chunk and write it to the brick. Is this right?
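The 100MB figure in the question can be checked with quick arithmetic. Assuming Gluster's dispersed layout stores a fragment of size file_size / data_bricks on every brick (an assumption about the on-disk layout, not stated in the thread):

```shell
# Back-of-envelope check for the 16+4 example above.
FILE_MB=2048      # 2 GB file being healed
DATA_BRICKS=16    # 16+4 configuration: 16 data, 4 redundancy
CHUNK_MB=$((FILE_MB / DATA_BRICKS))
echo "fragment per brick: ${CHUNK_MB} MB"
```

Under that assumption the fragment to reconstruct is 128MB rather than 100MB; dividing by all 20 bricks (2048/20, roughly 102MB) would undercount, since redundancy bricks hold fragments of the same size as data bricks rather than a share of the data.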
2017 Sep 22
2
sparse files on EC volume
Hello I'm running some tests to compare performance between a Gluster FUSE mount and formatted sparse files (located on the same Gluster FUSE mount). The Gluster volume is EC (same for both tests). I'm seeing a HUGE difference and trying to figure out why. Here is an example: GlusterFUSE mount: # cd /mnt/glusterfs # rm -f testfile1 ; dd if=/dev/zero of=testfile1 bs=1G count=1 1+0 records
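The write test quoted above can be reproduced in a self-contained form against a temporary directory (substitute the Gluster FUSE mount path to repeat the original comparison; the smaller block size and count here just keep the run short):

```shell
# Hypothetical stand-in for /mnt/glusterfs: a temp dir on local disk.
TESTDIR=$(mktemp -d)
rm -f "${TESTDIR}/testfile1"
# conv=fsync makes dd flush before reporting, so the printed rate
# reflects data actually committed rather than what landed in the
# page cache - important when comparing a FUSE mount against local disk.
dd if=/dev/zero of="${TESTDIR}/testfile1" bs=1M count=64 conv=fsync
rm -rf "${TESTDIR}"
```

Without conv=fsync (as in the original command), a dd against a local filesystem mostly measures the page cache, which by itself can explain a large gap versus a FUSE mount.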
2017 Jul 31
0
Hot Tier
Milind and Hari, Can you please take a look at this? Thanks, Nithya On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > I'm looking for advice on the hot tier feature - how can I tell if the hot > tier is working? > > I've attached a replicated-distributed hot tier to an EC volume. > Yet, I don't think it's working, at
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > >Is it possible that this matches your observations? > Yes, that matches what I see. So 19 files are being healed in parallel by 19 > SHD processes. I thought only one file was being healed at a time. > Then what is the meaning of the disperse.shd-max-threads parameter? If I > set it to 2 then each SHD
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan, On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote: >Is it possible that this matches your observations? Yes, that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file was being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2 then each SHD thread
2017 Sep 26
0
sparse files on EC volume
Hi Dmitri, On 22/09/17 17:07, Dmitri Chebotarov wrote: > > Hello > > I'm running some tests to compare performance between a Gluster FUSE mount > and formatted sparse files (located on the same Gluster FUSE mount). > > The Gluster volume is EC (same for both tests). > > I'm seeing a HUGE difference and trying to figure out why. Could you explain what hardware
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations? Yes, that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file was being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2, will each SHD thread heal two files at the same time? >How many IOPS can your bricks handle? Bricks are 7200RPM NL-SAS
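The concurrency question above reduces to simple arithmetic, assuming (as the thread suggests) one self-heal daemon per server and disperse.shd-max-threads capping the number of files each SHD heals concurrently:

```shell
SHD_PROCESSES=19    # one self-heal daemon per server, as observed above
SHD_MAX_THREADS=2   # hypothetical disperse.shd-max-threads setting
MAX_PARALLEL=$((SHD_PROCESSES * SHD_MAX_THREADS))
echo "up to ${MAX_PARALLEL} files healing concurrently"
```

So raising the option multiplies cluster-wide heal concurrency, at the cost of more load on the bricks doing the reads.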
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on read, a few performance translators have to be turned off. By default the performance translators quick-read and io-cache are turned on. You can turn them off so that files will be migrated on read. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > If it was just reads then the tier daemon won't migrate
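The two translators named above can be disabled per volume; the volume name "v1" is taken from earlier messages in this thread and is a placeholder for your own. Both options help small-file read performance, so consider re-enabling them after testing:

```shell
# Disable the client-side read caches so reads reach the tier daemon
# and can trigger promotion to the hot tier.
gluster volume set v1 performance.quick-read off
gluster volume set v1 performance.io-cache off

# Confirm the current values:
gluster volume get v1 performance.quick-read
gluster volume get v1 performance.io-cache
```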
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan, On 30/05/17 10:22, Serkan Çoban wrote: > Ok, I understand that the heal operation takes place on the server side. In > this case I should see X KB of > outbound network traffic from each of the 16 servers and 16X KB of inbound traffic to the > failed brick server, right? So that process will get 16 chunks, > recalculate our chunk and write it to disk. That should be the normal operation for a single
2017 Sep 11
0
Slow performance of gluster volume
Hi Abi Can you please share your current transfer speeds after you made the change? Thank you. On Mon, Sep 11, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote: > ----- Original Message ----- > > From: "Abi Askushi" <rightkicktech at gmail.com> > > To: "Ben Turner" <bturner at redhat.com> > > Cc: "Krutika Dhananjay"