similar to: Replace Brick failed

Displaying 20 results from an estimated 50000 matches similar to: "Replace Brick failed"

2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting a gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The logs say that it was unable to start the brick [2013-08-08
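When volume start fails with "Commit failed on localhost", the brick log usually names the real cause. A rough sketch of where to look (the brick path is a placeholder; brick logs live under /var/log/glusterfs/bricks/, named after the brick path with slashes replaced by dashes):

  # tail -n 50 /var/log/glusterfs/bricks/data-_home.log    (see why the brick process exited)
  # gluster volume status _home                            (confirm which bricks are offline)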
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users: sorry for the long post, I have run out of ideas here; kindly let me know if I am looking in the right places for logs, and suggest any actions... thanks. A sudden power loss caused a hard reboot - now the volume does not start. Glusterfs 3.3.1 on CentOS 6.1, transport: TCP, sharing the volume over NFS for VM storage - VHD files. Type: distributed - only 1 node (brick), XFS (LVM)
2012 Oct 19
1
copying failed once brick-replace is starting
hi, all. While I was copying a file (about 1GB) into the volume, I tried to replace one brick of the volume. The copy then halted with the error msg: "cp: writing "./d", transport endpoint is not connected". Note that I use two server bricks without AFR. I'm not sure: is this a bug, or a yet-to-be-added feature of glusterfs? Best Regards. Jules Wang.
2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup: gluster> volume info Volume Name: myvolume Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: 10.2.218.188:/srv Brick2: 10.116.245.136:/srv Brick3: 10.206.38.103:/srv Brick4: 10.114.41.53:/srv Brick5: 10.68.73.41:/srv Brick6: 10.204.129.91:/srv I *killed* Brick #4 (kill -9 and then shut down instance). My
2019 Jun 11
1
Proper command for replace-brick on distribute–replicate?
Dear list, In a recent discussion on this list Ravi suggested that the documentation for replace-brick was out of date. For a distribute-replicate volume the documentation currently says that we need to kill the old brick's PID, create a temporary empty directory on the FUSE mount, check the xattrs, then replace-brick with commit force. Is all this still necessary? I'm running Gluster 5.6 on
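For reference, on recent releases the one supported replace-brick form is the single-step commit force, after which self-heal repopulates the new brick. A minimal sketch (volume and brick paths are placeholders):

  # gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force
  # gluster volume heal myvol full     (optionally trigger a full self-heal)
  # gluster volume heal myvol info     (watch healing progress)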
2019 Jun 12
1
Proper command for replace-brick on distribute–replicate?
On 12/06/19 1:38 PM, Alan Orth wrote: > Dear Ravi, > > Thanks for the confirmation. I replaced a brick in a volume last night > and by the morning I see that Gluster has replicated data there, > though I don't have any indication of its progress. The `gluster v > heal volume info` and `gluster v heal volume info split-brain` are all > looking good so I guess that's
2011 Aug 12
2
Replace brick of a dead node
Hi! Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to. I have a two-node cluster, with two bricks replicated, one on each node. Let's say one of the nodes dies and is unreachable. I want to be able to spin up a new node and replace the dead node's brick with a location on the new node. The command 'gluster volume
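On current releases the usual answer is to probe the replacement node and swap the brick in one step; note this is a sketch with placeholder names, and the 3.x releases of that era may have required extra steps:

  # gluster peer probe newnode
  # gluster volume replace-brick myvol deadnode:/bricks/b1 newnode:/bricks/b1 commit force
  # gluster volume heal myvol full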
2011 Jun 20
1
Per-directory brick preference?
Hi, I operate a distributed replicated (1:2) setup that looks like this: server1:bigdisk, server1:smalldisk, server2:bigdisk, server2:smalldisk; the replica sets are bigdisk-bigdisk and smalldisk-smalldisk. This setup will be extended by another set of four bricks (same layout) within the next few days. I could make those into another volume entirely, but I'd prefer not to, leaving me with more
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, My volume home is configured in replicate mode (version 3.12.4) with the bricks server1:/data/gluster/brick1 and server2:/data/gluster/brick1. server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it, and did a > gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit
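For context, the documented reset-brick sequence for a reformatted disk takes the brick offline first and then commits the same path back in (a sketch using the names from the post):

  # gluster volume reset-brick home server2:/data/gluster/brick1 start
  ... unmount, reformat, and remount the disk ...
  # gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force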
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to reset the brick if the brick path does not change. Replace the brick, format and mount it, then run gluster v start volname force. To start self-heal, just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks >
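Spelled out, the suggested recovery would look roughly like this (a sketch; the device name is a placeholder and "home" is the volume from the thread):

  # mkfs.xfs /dev/sdX1                   (reformat the replacement disk)
  # mount /dev/sdX1 /data/gluster/brick1
  # gluster volume start home force      (respawn the brick process on the empty brick)
  # gluster volume heal home full        (trigger a full self-heal from the good replica)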
2013 Jan 03
2
"Failed to perform brick order check,..."
Hi guys: I have just installed gluster on a single instance, and the command: gluster volume create gv0 replica 2 server.n1:/export/brick1 server.n1:/export/brick2 returns with: "Failed to perform brick order check... do you want to continue ..? y/N"? What is the meaning of this error message, and why does brick order matter? -- Jay Vyas http://jayunit100.blogspot.com
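The warning appears because the create command groups consecutive bricks into replica sets, and here both bricks of the replica pair live on server.n1, so losing that one server loses both copies. Spreading the pair across servers avoids the warning (a sketch; server.n2 is a placeholder):

  # gluster volume create gv0 replica 2 server.n1:/export/brick1 server.n2:/export/brick1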
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each for 6 of them and 2 bricks for the remaining 2. Full healing will just take ages... for just a single brick to resync! > gluster v status home volume status home Status of volume: home Gluster process TCP Port RDMA Port Online Pid
2013 Dec 06
1
replace-brick failing - transport.address-family not specified
Hello, I have what I think is a fairly basic Gluster setup; however, when I try to carry out a replace-brick operation it consistently fails... Here are the command line options: root at osh1:~# gluster volume info media Volume Name: media Type: Replicate Volume ID: 4c290928-ba1c-4a45-ac05-85365b4ea63a Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1:
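One workaround reported for this class of error on older releases - an assumption here, not something confirmed in this thread - is pinning glusterd to IPv4 in its own volfile and restarting:

  /etc/glusterfs/glusterd.vol:  add "option transport.address-family inet" inside the volume management block
  # service glusterd restart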
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume: gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume: gluster volume add-brick VOLUME rep 3
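As a sketch, the sequence described above (brick path taken from the post; note the CLI keyword is replica, and the count must match the layout before and after):

  # gluster volume remove-brick VOLUME replica 2 server2:/gluster/VOLUME/brick0/brick force
  ... unmount the old disk, mount the new one at the same path ...
  # gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
  # gluster volume heal VOLUME full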
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect the brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then update those attrs with stat commands. Example here: http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
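The linked recipe boils down to clearing the brick's identity so glusterd will accept it again, then letting heal reconcile the files already present. A rough sketch, assuming the brick still holds the data (paths are placeholders):

  # setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
  # setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick
  # rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
  # gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
  # gluster volume heal VOLUME full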
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove the attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume, it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .gluster and attr already in
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As far as I can see, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote: > Hello, > > We have a very fresh gluster 3.10.10 installation. > Our volume
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after the 10% gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
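A quick way to see which brick processes are down before retrying the heal (a sketch):

  # gluster volume status gv0          (the Online column shows bricks with no running PID)
  # gluster volume start gv0 force     (respawns any brick process that has died)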
2017 Dec 11
2
reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me. > > Introducing reset-brick command > > /Notes for users:/ The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad etc > That's what I need; the use case is a disk goes bad on
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. Another case is where you just want to change the hostname of a node's bricks to its IP address. In that case you follow the same steps, but just have to provide the IP
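As a sketch of that second scenario (the volume name, brick path, and IP are placeholders), the same brick is committed back under its new address:

  # gluster volume reset-brick myvol node1:/bricks/b1 start
  # gluster volume reset-brick myvol node1:/bricks/b1 10.0.0.1:/bricks/b1 commit force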