search for: 46b4

Displaying 11 results from an estimated 11 matches for "46b4".

2013 Jun 11 · 1 · cluster.min-free-disk working?
...20% I thought new data would go to the two empty bricks, but gluster does not seem to honor the 20% limit. Have I missed something here? Thanks! /jon
***************gluster volume info************************
Volume Name: glusterKumiko
Type: Distribute
Volume ID: 8f639d0f-9099-46b4-b597-244d89def5bd
Status: Started
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: kumiko01:/mnt/raid6
Brick2: kumiko02:/mnt/raid6
Brick3: kumiko03:/mnt/raid6
Brick4: kumiko04:/mnt/raid6
Options Reconfigured:
cluster.min-free-disk: 20%
...
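For reference, a minimal sketch of how this option is usually set and verified from the gluster CLI, reusing the volume and brick names from the post above. As generally documented for DHT, min-free-disk only steers where newly created files are placed; it does not relocate data already sitting on the full bricks.

    gluster volume set glusterKumiko cluster.min-free-disk 20%
    gluster volume info glusterKumiko
    df -h /mnt/raid6    # run on each kumiko node to confirm the actual free space per brick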
2014 Apr 02 · 0 · Trying to understand eager loading and accessing collections from within instance methods
...n email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/77cb25cd-00b2-46b4-8657-ceec1d10b350%40googlegroups.com. For more options, visit https://groups.google.com/d/optout.
2017 Jun 04 · 2 · Rebalance + VM corruption - current status and request for feedback
...n *dht.readdir-optimize=on >> --xlator-option *dht.rebalance-cmd=5 --xlator-option >> *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b --xlator-option >> *dht.commit-hash=3376396580 --socket-file >> /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock >> --pid-file /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid >> -l /var/log/glusterfs/testvol-rebalance.log) >> >> >> Could you upgrade all packages to 3.10.2 and try again? >> >> -Krutika >> ...
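As background, the quoted process appears to be the rebalance daemon that glusterd spawns for the testvol volume named in the pid-file path. A minimal sketch of the standard CLI calls usually used to drive and watch such a rebalance (these are generic gluster commands, not the exact reproduction steps discussed in the thread):

    gluster volume rebalance testvol start
    gluster volume rebalance testvol status
    tail -f /var/log/glusterfs/testvol-rebalance.log    # same log file as in the quoted command line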
2017 Jun 06 · 2 · Rebalance + VM corruption - current status and request for feedback
...cate*.entry-self-heal=off --xlator-option *dht.readdir-optimize=on > --xlator-option *dht.rebalance-cmd=5 --xlator-option > *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b --xlator-option > *dht.commit-hash=3376396580 --socket-file > /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock --pid-file > /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid -l > /var/log/glusterfs/testvol-rebalance.log) > > > Could you upgrade all packages to 3.10.2 and try again? > > -Krutika > > > On Fri, May 26,...
2017 Jun 05 · 0 · Rebalance + VM corruption - current status and request for feedback
...timize=on --xlator-option *dht.rebalance-cmd=5 >>> --xlator-option *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b >>> --xlator-option *dht.commit-hash=3376396580 >>> --socket-file /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock --pid-file >>> /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid >>> -l /var/log/glusterfs/testvol-rebalance.log) >>> >>> >>> Could you upgrade all packages to 3.10.2 and try again? >>> >...
2017 Jun 05 · 1 · Rebalance + VM corruption - current status and request for feedback
...or-option *dht.rebalance-cmd=5 >>>> --xlator-option *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b >>>> --xlator-option *dht.commit-hash=3376396580 >>>> --socket-file /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock --pid-file >>>> /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid >>>> -l /var/log/glusterfs/testvol-rebalance.log) >>>> >>>> >>>> Could you upgrade all packages to 3.10.2 and try agai...
2017 Jun 06 · 0 · Rebalance + VM corruption - current status and request for feedback
...-xlator-option *replicate*.entry-self-heal=off --xlator-option *dht.readdir-optimize=on --xlator-option *dht.rebalance-cmd=5 --xlator-option *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b --xlator-option *dht.commit-hash=3376396580 --socket-file /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock --pid-file /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid -l /var/log/glusterfs/testvol-rebalance.log) Could you upgrade all packages to 3.10.2 and try again? -Krutika On Fri, May 26, 2017 at 4:46 PM, Mahdi Adnan <mahdi.adnan at outl...
2017 Jun 06 · 0 · Rebalance + VM corruption - current status and request for feedback
...n *dht.readdir-optimize=on >> --xlator-option *dht.rebalance-cmd=5 --xlator-option >> *dht.node-uuid=7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b --xlator-option >> *dht.commit-hash=3376396580 --socket-file >> /var/run/gluster/gluster-rebalance-801faefa-a583-46b4-8eef-e0ec160da9ea.sock >> --pid-file /var/lib/glusterd/vols/testvol/rebalance/7c0bf49e-1ede-47b1-b9a5-bfde6e60f07b.pid >> -l /var/log/glusterfs/testvol-rebalance.log) >> >> >> Could you upgrade all packages to 3.10.2 and try again? >> >> -Krutika >> ...
2017 Oct 26 · 0 · not healing one file
Hey Richard, Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards, Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
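A sketch of how that information is typically collected, keeping the placeholders from the request above (<volname> and <brickpath/filepath> are whatever applies to the affected volume); the log file names assume the usual /var/log/glusterfs layout with glustershd.log and glfsheal-<volname>.log:

    gluster volume info <volname>
    getfattr -d -e hex -m . <brickpath/filepath>    # run on every brick that holds a copy of the file
    ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log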
2017 Oct 26 · 3 · not healing one file
On a side note, try the recently released health report tool and see whether it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26 · 2 · not healing one file
...27:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 2047eb52-6715-43d0-b32e-ca1acce2f18e. sources=0 [2] sinks=1
[2017-10-25 10:40:18.639030] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 5779ef76-e253-46b4-a364-12e4fb6222ee
[2017-10-25 10:40:18.642114] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 5779ef76-e253-46b4-a364-12e4fb6222ee. sources=0 [2] sinks=1
[2017-10-25 10:40:18.651810] I [MSGID: 108026] [afr-self-heal-common.c:132...
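For completeness, a minimal sketch of how outstanding heals are usually inspected on a replicate volume; the volume name home is inferred from the 0-home-replicate-0 prefix in the log lines, and the GFID comes from the excerpt above:

    gluster volume heal home info
    gluster volume heal home info split-brain
    grep 5779ef76-e253-46b4-a364-12e4fb6222ee /var/log/glusterfs/glustershd.log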