search for: 4a4d

Displaying 11 results from an estimated 11 matches for "4a4d".

2017 Sep 13
3
one brick one volume process dies?
...> <mailto:Gluster-users at gluster.org> > http://lists.gluster.org/mailman/listinfo/gluster-users > <http://lists.gluster.org/mailman/listinfo/gluster-users> > > hi, here: $ gluster vol info C-DATA Volume Name: C-DATA Type: Replicate Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUS...
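A minimal sketch of confirming the dead brick from the excerpt above, assuming the C-DATA volume and standard gluster CLI output (the grep pattern is only illustrative):

    $ gluster volume status C-DATA                 # the failed brick shows Online = N
    $ ps aux | grep '[g]lusterfsd' | grep C-DATA   # no glusterfsd process for that brick path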
2017 Sep 13
0
one brick one volume process dies?
...> http://lists.gluster.org/mailman/listinfo/gluster-users >> <http://lists.gluster.org/mailman/listinfo/gluster-users> >> >> >> > hi, here: > > $ gluster vol info C-DATA > > Volume Name: C-DATA > Type: Replicate > Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA > Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA > Brick3: 10.5.6.32:...
2017 Sep 13
2
one brick one volume process dies?
...tinfo/gluster-users >>> <http://lists.gluster.org/mailman/listinfo/gluster-users> >>> >>> >>> >> hi, here: >> >> $ gluster vol info C-DATA >> >> Volume Name: C-DATA >> Type: Replicate >> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e >> Status: Started >> Snapshot Count: 0 >> Number of Bricks: 1 x 3 = 3 >> Transport-type: tcp >> Bricks: >> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA >> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-...
2015 Aug 13
1
Host and Guest UUID ?
Hi, Are there some kind of UUIDs for Host and Guest? If yes, how may I retrieve them programmatically? My goal is to trace GUEST migrations. Thanks for help. Regards, J.P. Ribeauville P: +33.(0).1.47.17.27.87 Puteaux 3, Floor 5, Office 4 jpribeauville@axway.com<mailto:jpribeauville@axway.com> http://www.axway.com<http://www.axway.com/> Think of the environment before
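Both UUIDs are exposed by libvirt; a minimal sketch, with <domain> as a placeholder for the guest name. The domain UUID stays constant across a migration while the reporting host changes, so logging the pair is one way to trace guest movements:

    $ virsh domuuid <domain>                  # guest (domain) UUID
    $ virsh capabilities | grep -m1 '<uuid>'  # host UUID as libvirt reports it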
2017 Sep 13
0
one brick one volume process dies?
...<http://lists.gluster.org/mailman/listinfo/gluster-users> >>>> >>>> >>>> >>> hi, here: >>> >>> $ gluster vol info C-DATA >>> >>> Volume Name: C-DATA >>> Type: Replicate >>> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e >>> Status: Started >>> Snapshot Count: 0 >>> Number of Bricks: 1 x 3 = 3 >>> Transport-type: tcp >>> Bricks: >>> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA >>> Brick2: 10.5.6.100:/__.aLocalStora...
2017 Sep 28
1
one brick one volume process dies?
...ers > <http://lists.gluster.org/mailman/listinfo/gluster-users>> > > > > hi, here: > > $ gluster vol info C-DATA > > Volume Name: C-DATA > Type: Replicate > Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: > 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA >...
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and gluster peer status. Apart from the above info, please provide the glusterd logs and cmd_history.log. Thanks Gaurav On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi everyone > > I have a 3-peer cluster with all vols in replica mode, 9 vols. > What I see, unfortunately, is one brick
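A sketch of collecting the requested diagnostics, assuming the default /var/log/glusterfs log directory (paths may differ per distribution):

    $ gluster volume info > /tmp/gluster-vol-info.txt
    $ gluster volume status > /tmp/gluster-vol-status.txt
    $ gluster peer status > /tmp/gluster-peer-status.txt
    $ cp /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log /tmp/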
2017 Sep 12
2
one brick one volume process dies?
hi everyone I have a 3-peer cluster with all vols in replica mode, 9 vols. What I see, unfortunately, is that one brick fails in one vol; when it happens, it's always the same vol on the same brick. Command: gluster vol status $vol - would show the brick not online. Restarting glusterd with systemctl does not help; only a system reboot seems to help, until it happens next time. How to troubleshoot this
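Restarting glusterd alone does not respawn a dead brick process, which may explain why only a reboot appeared to help. A minimal sketch of the usual non-reboot recovery, with $vol as above and the brick log name as a placeholder:

    $ gluster volume status $vol                       # identify the offline brick
    $ gluster volume start $vol force                  # respawn only the missing brick process
    $ less /var/log/glusterfs/bricks/<brick-path>.log  # the brick log usually records why it died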
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
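A sketch of gathering items 2 and 3 on each brick host; <brickpath>, <filepath> and <volname> are placeholders:

    $ getfattr -d -e hex -m . <brickpath>/<filepath>   # non-zero trusted.afr.* values indicate pending heals
    $ ls /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log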
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
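Assuming the tool referred to is the gluster-health-report project (its packaging may have changed since this was written), a sketch of running it on each of the three peers:

    $ pip install gluster-health-report
    $ gluster-health-report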
2017 Oct 26
2
not healing one file
...B9CED4D6C915794 (4da0c5b8-40a0-4e1a-82fc-ac7d946a9523) on home-client-2 [2017-10-25 10:13:40.170859] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/274AD63E7A45CA4A1F0FFC972A8BED058280C2FC (97467e4b-dd26-4a4d-b1cd-802e331673da) on home-client-2 [2017-10-25 10:13:40.209229] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/DF1744069B0891594A92F939E6647DD5F6DEBEC9 (fd52497c-4216-4ee0-90ee-a0f41a72f599) on home-cli...
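The parenthesised values in these log lines are GFIDs, and each brick keeps a hard link to every file under .glusterfs, keyed by the first two bytes of the GFID. A sketch of resolving one of the GFIDs above back to its path on a brick (<brickpath> is a placeholder):

    $ ls -l <brickpath>/.glusterfs/97/46/97467e4b-dd26-4a4d-b1cd-802e331673da
    $ find <brickpath> -samefile <brickpath>/.glusterfs/97/46/97467e4b-dd26-4a4d-b1cd-802e331673da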