similar to: single brick logging errors endlessly

Displaying 20 results from an estimated 10000 matches similar to: "single brick logging errors endlessly"

2013 Jan 03
0
Resolve brick failed in restore
Hi, I have a lab with 10 machines acting as storage servers for some compute machines, using glusterfs to distribute the data as two volumes. Created using: gluster volume create vol1 192.168.10.{221..230}:/data/vol1 gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2 and mounted on the client and server machines using: mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1 mount
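
For reference, a minimal sketch of that layout using the addresses quoted above (the start step is added here; it is not in the quoted excerpt, but a volume must be started before it can be mounted):

    # distributed volume across the ten servers
    gluster volume create vol1 192.168.10.{221..230}:/data/vol1
    gluster volume start vol1
    # two-way replicated volume on the same servers
    gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
    gluster volume start vol2
    # client-side mount
    mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
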
2018 Feb 25
0
Re-adding an existing brick to a volume
.gluster and attr already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect the brick back". What I did many times: delete .gluster and reset the attr on the folder, connect the brick, and then update those attr with stat commands - example here http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .gluster and attr already in
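
A hedged sketch of the procedure Vlad describes, using the brick path quoted in this thread (the xattr names are the ones gluster places on a brick root; this is an outline, not a verified recipe from the thread):

    # on server2, with the brick currently detached from the volume
    setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
    setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick
    rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
    # re-add the brick and let self-heal reconcile the data already in place
    gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
    gluster volume heal VOLUME full
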
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to some documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As far as I can see, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote: > Hello, > > We have a very fresh gluster 3.10.10 installation. > Our volume
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume: gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume: gluster volume add-brick VOLUME rep 3
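
The sequence being described, sketched with the quoted volume and brick names (shown with the full "replica" keyword; treat it as an outline rather than an exact transcript of the poster's commands):

    # drop server2's copy, shrinking the volume to two replicas
    gluster volume remove-brick VOLUME replica 2 server2:/gluster/VOLUME/brick0/brick force
    # mount the new disk at the same path, then re-add it as the third replica
    gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
    # self-heal then copies data onto the empty brick; progress can be watched with
    gluster volume heal VOLUME info
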
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > gluster version 3.10.6, replica 3 volume, daemon is present but does not > appear to be functioning > > peculiar behaviour. If I kill the glusterfs brick daemon and restart > glusterd then the brick becomes available - but one of my other volumes > bricks on the same server goes down in
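
For a brick that is down while the rest of the volume is healthy, a common check-and-restart sketch (volume name is a placeholder):

    gluster volume status VOLNAME        # each brick should show a PID and Online: Y
    gluster volume start VOLNAME force   # respawns only the brick processes that are not running
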
2017 Nov 28
0
move brick to new location
Hello everybody, we have a number of "replica 3 arbiter 1" or (2 + 1) volumes. Because we're running out of space on some volumes, I need to optimize the usage of the physical disks; that means I want to consolidate volumes with low usage onto the same physical disk. I can do it with "replace-brick commit force", but that looks a bit drastic to me because it immediately drops
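
The command being weighed, sketched with placeholder host and paths; on a replica/arbiter volume the new brick starts empty and is filled by self-heal afterwards:

    gluster volume replace-brick VOLNAME server1:/old-disk/brick server1:/new-disk/brick commit force
    gluster volume heal VOLNAME info     # watch the new brick being healed
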
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: > > > > On 13/11/2017 at 21:07, Daniel Berteaud wrote: >> >> On 13/11/2017 at 10:04, Daniel Berteaud wrote: >>> >>> Could I just remove the content of the brick (including the >>> .glusterfs directory) and reconnect ? >>> >> If it is only the brick that is faulty on the bad node,
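
If the brick content is wiped as asked above, a full heal can repopulate it from the good node once the brick is back online; a sketch with a placeholder volume name:

    gluster volume heal VOLNAME full
    gluster volume heal VOLNAME info     # wait until no entries remain pending
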
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>"
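
For reference, the two commands Karthik is contrasting, with a placeholder volume name (the first only applies to replicated volumes):

    gluster volume heal VOLNAME          # index self-heal of mismatching replica copies
    gluster volume heal VOLNAME info     # list entries still pending heal
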
2017 Jul 10
0
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com> >> wrote: >> >>> >>> >>> On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi
2017 Nov 13
2
Help with reconnecting a faulty brick
Hi everyone. I'm running a simple Gluster setup like this: * Replicate 2x1 * Only 2 nodes, with one brick each * Nodes are CentOS 7.0, using GlusterFS 3.5.3 (yes, I know it's old, I just can't upgrade right now) No sharding or anything "fancy". This Gluster volume is used to host VM images, and is used by both nodes (which are gluster servers and clients).
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after the 10% gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
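
On a purely distributed volume there are no replica copies to heal, so the more useful check is whether every brick process is up; a sketch using the volume name from the post:

    gluster volume status gv0            # each brick should report Online: Y and a PID
    gluster volume status gv0 detail     # adds per-brick disk usage and inode counts
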
2018 Apr 12
0
issues with replicating data to a new brick
Hello everybody, I have some kind of a situation here: I want to move some volumes to new hosts. The idea is to add the new bricks to the volume, sync, and then drop the old bricks. The starting point is: Volume Name: Server_Monthly_02 Type: Replicate Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1:
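
A hedged outline of that plan for a 1 x 2 replicate volume (the new host and brick path are placeholders, not from the post):

    # grow to three replicas with the new brick and let it sync
    gluster volume add-brick Server_Monthly_02 replica 3 newhost:/gluster/monthly02/brick
    gluster volume heal Server_Monthly_02 info    # wait until nothing is pending
    # then drop the old brick again
    gluster volume remove-brick Server_Monthly_02 replica 2 oldhost:/gluster/monthly02/brick force
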
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change a node's bricks from the hostname to the IP address. In this case too, you will follow the same steps but just have to provide the IP
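
The two-step form being referred to, with placeholder host and path; the brick is named twice because the same command can also substitute a different brick in the second position:

    gluster volume reset-brick VOLNAME server1:/data/brick start
    # swap the disk (or switch the hostname to an IP), then bring the same path back
    gluster volume reset-brick VOLNAME server1:/data/brick server1:/data/brick commit force
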
2017 Jun 19
0
different brick using the same port?
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No it's not. Can you please help me with the following information: 1. gluster --version 2. glusterd log & cmd_history logs from both
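
A quick way to gather what is being asked for (log file names can vary slightly between releases):

    gluster --version
    gluster volume status                          # shows the port each brick is bound to
    # glusterd and command-history logs, typically under /var/log/glusterfs/
    less /var/log/glusterfs/glusterd.log
    less /var/log/glusterfs/cmd_history.log
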
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik, Thanks a lot for the explanation. Does it mean a distributed volume's health can be checked only by the "gluster volume status" command? And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" can we face if this option is reduced to, for example, 5%? Could you point to any best practice document(s)? Regards, Anatoliy
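
For context, a sketch of inspecting and changing the option in question (volume name is a placeholder); lowering it lets DHT keep placing new files on bricks until they are that much closer to full:

    gluster volume get VOLNAME cluster.min-free-disk
    gluster volume set VOLNAME cluster.min-free-disk 5%
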
2017 Jul 07
0
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
You'd need to allow some more time to dig into the logs. I'll try to get back on this by Monday. On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com>
2017 Jun 19
1
different brick using the same port?
Isn't this just brick multiplexing? On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <amukherj at redhat.com> wrote: >On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > >> Hi, all >> >> >> >> I found two of my bricks from different volumes are using the same >port >> 49154 on the same glusterfs server node, is
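
Brick multiplexing does make several bricks share one process and port; on releases that support it (3.10+), whether it is enabled can be checked cluster-wide:

    gluster volume get all cluster.brick-multiplex
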
2017 Dec 12
0
Impossible to add new brick
Dear all, I would like to add a new brick to the running Gluster volume. I have 3 bricks on 3 different servers. Everything is fine on the running system. Connection to the application on the filesystem is ok. All 3 servers have a load between 0.5 and 1.5 in top. Iotop is idle and iftop output is between 1 and 5 MB of traffic. Now I plan to add a new brick to the volume based on an LVM to create
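
A hedged sketch of the usual preparation for a new LVM-backed brick before add-brick (the VG, LV, mount point, and server name are placeholders, not from the post):

    lvcreate -L 500G -n brick4 vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick4
    mkdir -p /gluster/brick4
    mount /dev/vg_gluster/brick4 /gluster/brick4
    gluster peer probe server4                    # only if the new brick lives on a new server
    # on a replicated volume, bricks must be added in multiples of the replica count
    gluster volume add-brick VOLNAME server4:/gluster/brick4/data
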
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > OK, so the log just hints to the following: > > [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] > 0-management: Commit failed for operation Reset Brick on local node > [2017-07-05 15:04:07.178214] E [MSGID: 106123] >
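
Since the thread subject is op-version, a sketch of checking and raising it (the number shown is illustrative for a 3.10 cluster; reset-brick needs the cluster op-version at or above the release that introduced the command):

    gluster volume get all cluster.op-version     # current cluster-wide op-version
    gluster volume set all cluster.op-version 31000
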