Displaying 14 results from an estimated 14 matches for "4e7a".
2018 May 22
1
split brain? but where?
...id doesn't show up,
8><---
[root@glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root@glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root root 1396 May 22 13:01 gfid.run
so the gfid file seems large... but du can't see it...
[root@glusterp2 fb]# du -a /bricks/brick1/gv0 | sort -n -r | head -n 10
275411712 /bricks/brick1/gv0
275411696 /bricks/brick1/gv0/.glusterfs
22484988 /bricks/brick1/...
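One thing worth noting when reading that du output: entries under .glusterfs are hard links to the real files on the brick, and du counts each inode only once, so the gfid copy never shows up as extra usage. A minimal sketch in a scratch directory (all paths here are made up for the demo, not from this volume):

```shell
# Demo: du counts a hard-linked file once, so a gfid hard link under
# .glusterfs adds no extra usage. Scratch paths only.
d=$(mktemp -d)
mkdir -p "$d/.glusterfs/ea/fb"
dd if=/dev/zero of="$d/.glusterfs/ea/fb/gfid" bs=1024 count=1024 2>/dev/null
ln "$d/.glusterfs/ea/fb/gfid" "$d/bigfile"   # second name, same inode
du -sk "$d"                                  # ~1 MiB total, not ~2 MiB
rm -rf "$d"
```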
2018 May 21
2
split brain? but where?
...01
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
root@salt-001:~# salt gluster* cmd.run 'gluster volume heal gv0 info'
glusterp2.graywitch.co.nz:
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp3:/bricks/brick1/gv0...
2018 May 22
2
split brain? but where?
...which is in split brain.
The steps to recover from split-brain can be found here,
http://gluster.readthedocs.io/en/latest/Troubleshooting/resolving-splitbrain/
HTH,
Karthik
On Tue, May 22, 2018 at 4:03 AM, Joe Julian <joe at julianfamily.org> wrote:
> How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
>
> https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
>
> On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
> >Hi,
> >
> >I seem to have a split brain issue, but I cannot figure out where this
>...
2018 May 22
0
split brain? but where?
I tried this already.
8><---
[root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp2 fb]#
8><---
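For a regular file there should normally be at least two names on the brick sharing the inode: the real path and its gfid hard link under .glusterfs. When find -samefile returns only the .glusterfs path, as above, the gfid file has no remaining counterpart on that brick. A scratch-directory sketch of what a healthy file looks like (demo paths only, not from gv0):

```shell
# Sketch: a healthy brick file has two hard-linked names, so
# find -samefile lists both. Scratch paths only.
d=$(mktemp -d)
mkdir -p "$d/.glusterfs/ea/fb"
echo data > "$d/.glusterfs/ea/fb/gfidfile"
ln "$d/.glusterfs/ea/fb/gfidfile" "$d/realname"   # the "real" path, same inode
find "$d" -samefile "$d/.glusterfs/ea/fb/gfidfile"
# lists both the .glusterfs name and realname
rm -rf "$d"
```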
Gluster 4
CentOS 7.4
8><---
df -h
[root@glusterp2 fb]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev...
2018 May 21
0
split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
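For anyone following that link: a gfid maps to a fixed location under the brick, .glusterfs/&lt;first two hex chars&gt;/&lt;next two&gt;/&lt;full gfid&gt;, which is why this one lives under .glusterfs/ea/fb. A quick sketch of the mapping (the brick root is the one from this thread; the slicing itself is plain string handling):

```shell
# Compute the .glusterfs path for a gfid. The mapping is just the
# first two and next two hex characters of the gfid as directories.
brick=/bricks/brick1/gv0
gfid=eafb8799-4e7a-4264-9213-26997c5a4693
p1=$(printf %s "$gfid" | cut -c1-2)
p2=$(printf %s "$gfid" | cut -c3-4)
echo "$brick/.glusterfs/$p1/$p2/$gfid"
# -> /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
```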
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
>Hi,
>
>I seem to have a split brain issue, but I cannot figure out where this
>is
>and what it is, can someo...
2018 May 10
2
broken gluster config
...king at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine what file (or dir? gv0?) is actually the issue.
[root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4...
2018 May 10
0
broken gluster config
Trying to read, I can't understand what is wrong?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-92...
2018 May 10
0
broken gluster config
Also, I have this "split brain"?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/k...
2018 May 10
2
broken gluster config
[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick
2011 Feb 08
10
mkfs.btrfs - error checking /dev/sda5 mount status
Hi,
I'm hitting this issue; sda5 is a normal device, nothing to do with
loop, encryption, etc.
# mkfs.btrfs /dev/sda5
WARNING! - Btrfs v0.19-35-g1b444cd-dirty IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
error checking /dev/sda5 mount status
Is there something I can do to resolve this?
Thank you
Lubos
2020 Jun 24
2
Target specific named address spaces
Hi,
Is there a way to implement named address spaces with clang/llvm, as is
possible with gcc?
We would like to have our own named address space that would be recognized
by the frontend.
Thanks in advance!
Regards,
Sebastien
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...933E351EB66728F (6e099dad-e8d2-4cba-a7a3-376476920749) on home-client-2
[2017-10-25 10:14:19.493061] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/1AD581330A74BE9A9EADB779E6508565A07344A9 (8bea27a3-13d9-4e7a-9110-e75d8325a14a) on home-client-2
[2017-10-25 10:14:19.518630] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/5FE8871C297722896E4B87647E4137F7AD15038E (8fe34aaf-dc9d-4f02-a681-0588c244f7d0) on home-cli...