similar to: Replace brick of a dead node

Displaying 20 results from an estimated 10000 matches similar to: "Replace brick of a dead node"

2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup: gluster> volume info Volume Name: myvolume Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: 10.2.218.188:/srv Brick2: 10.116.245.136:/srv Brick3: 10.206.38.103:/srv Brick4: 10.114.41.53:/srv Brick5: 10.68.73.41:/srv Brick6: 10.204.129.91:/srv I *killed* Brick #4 (kill -9 and then shut down instance). My
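A common recovery path for a dead node is to probe a replacement peer and swap the brick in place; self-heal then copies the data back from the surviving replica partner. A minimal sketch on newer releases, assuming a hypothetical replacement host NEWHOST and the brick paths above:

    gluster peer probe NEWHOST
    # commit force skips data migration; self-heal repopulates the new brick
    gluster volume replace-brick myvolume 10.114.41.53:/srv NEWHOST:/srv commit force
    gluster volume heal myvolume full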
2012 Nov 14
1
Howto find out volume topology
Hello, I would like to find out the topology of an existing volume. For example, if I have a distributed replicated volume, what bricks are the replication partners? Fred
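For reference, the replica sets in a distributed-replicated volume follow the brick order shown by gluster volume info: with replica N, each consecutive group of N bricks mirrors one another. A sketch with hypothetical hosts:

    # gluster volume info myvol
    Type: Distributed-Replicate
    Number of Bricks: 2 x 2 = 4
    Brick1: server1:/export/b1    <-- replica pair 1
    Brick2: server2:/export/b1    <-- replica pair 1
    Brick3: server1:/export/b2    <-- replica pair 2
    Brick4: server2:/export/b2    <-- replica pair 2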
2011 Jun 21
2
GlusterFS 3.1.5 now available
If you haven't seen it already, GlusterFS 3.1.5 is now available at http://www.gluster.org/download/ For those of you currently on the 3.1.x series, we recommend that you upgrade to this latest release. Here are some issues fixed in this release: Bug 2294: Fixed the issue occurred during creating and sharing of volumes with both RDMA and TCP/IP transport type. Bug 2522: Fixed the issue of
2008 Dec 20
14
building 1.4.0rc6
I am trying to build the latest release candidate and have run into a bit of a problem. When I run ./configure, I get: GlusterFS configure summary =========================== FUSE client : no Infiniband verbs : no epoll IO multiplex : yes Berkeley-DB : no libglusterfsclient : yes mod_glusterfs : no () argp-standalone : no I am going to need the gluster FUSE client now
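"FUSE client : no" in that summary usually means configure could not find the FUSE userspace development headers. A hedged sketch of the usual fix (package names vary by distro and release):

    yum install fuse fuse-devel      # or: apt-get install libfuse-dev
    ./configure                      # FUSE client should now report: yes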
2012 Jan 05
1
Can't stop or delete volume
Hi, I can't stop or delete a replica volume: # gluster volume info Volume Name: sync1 Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: thinkpad:/gluster/export Brick2: quad:/raid/gluster/export # gluster volume stop sync1 Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y Volume sync1 does not exist # gluster volume
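When "volume stop" reports a started volume as nonexistent, the glusterd instances usually disagree about the volume configuration. A hedged checklist, not a guaranteed fix:

    # run on each node and compare the views
    gluster peer status
    gluster volume info sync1
    # restarting glusterd often resyncs a stale configuration view
    service glusterd restart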
2011 Nov 07
2
Gluster/RDMA
To Harry Mangalam about Gluster/RDMA: make sure these modules are loaded # modprobe -v rdma_ucm # modprobe -v ib_uverbs # modprobe -v ib_ucm To run the subnet manager # modprobe -v ib_umad Make sure libibverbs and (libmlx4 or libmthca) RPMs are installed. I don't understand why the appropriate modules aren't loaded automatically. Could put something in /etc/modprobe.d/ to make this
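On systemd-based distros, one way to load these modules at boot is a modules-load.d drop-in (the exact path is an assumption; older releases use rc.modules or /etc/sysconfig/modules/ instead):

    # /etc/modules-load.d/rdma.conf -- one module name per line
    rdma_ucm
    ib_uverbs
    ib_ucm
    ib_umad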
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
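A sketch of the first option with two hypothetical servers: one brick per drive, and since replica 2 pairs consecutive bricks, replication partners are listed adjacently. The long brick list can be generated rather than typed:

    bricks=""
    for i in $(seq 1 24); do
        bricks="$bricks server1:/data$i server2:/data$i"
    done
    gluster volume create bigvol replica 2 $bricks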
2017 Dec 21
3
Wrong volume size with df
Sure! > 1 - output of gluster volume heal <volname> info Brick pod-sjc1-gluster1:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster1:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick
2017 Dec 11
2
reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me > > Introducing reset-brick command > > /Notes for users:/ The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad etc > That's what I need, the use case is a disk goes bad on
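For reference, the command is a start/commit pair, and the brick is named twice on commit because the second name may differ from the first (for example when switching a hostname to an IP). A sketch with placeholder names:

    gluster volume reset-brick VOLNAME HOST:/bricks/b1 start
    # ...replace or reformat the disk and remount it at the same path...
    gluster volume reset-brick VOLNAME HOST:/bricks/b1 HOST:/bricks/b1 commit force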
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, df now reports the correct combined size of all bricks instead of just one brick. Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect? On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote: > Sure! > > > 1 -
2018 Mar 04
1
tiering
Hi, Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with a 3 ssd tier where one ssd is bad. Status of volume: labgreenbin Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Hot Bricks: Brick labgfs81:/gfs/p1-tier/mount 49156 0 Y 4217 Brick
2017 Dec 19
3
Wrong volume size with df
I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ). After a server crash, "gluster peer status" reports all peers as connected. "gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size. When browsing the data, they seem to
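A hedged first check for this symptom is to compare the sizes the bricks report against what the client computes (volume name and mount point are placeholders):

    gluster volume status VOLNAME detail    # per-brick Total/Free Disk Space
    df -h /mnt/VOLNAME                      # the client's aggregate view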
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that if every node was rebooted one at a time, even without issuing a heal, the volume would have no issues after running gluster volume heal [volname] once all bricks are back online? ________________________________ From: Xavi Hernandez <jahernan at redhat.com> Sent: Thursday, March 15, 2018 12:09:05 AM To: Victor T Cc: gluster-users at gluster.org Subject:
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following - 1 - output of gluster volume heal <volname> info 2 - /var/log/glusterfs - provide the log file named <mountpoint>-<volname>.log 3 - output of gluster volume info <volname> 4 - output of gluster volume status <volname> 5 - Also, could you try unmounting the volume, mounting it again, and checking the size? ----- Original Message ----- From:
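Collected as runnable commands (VOLNAME, SERVER and the mount point are placeholders; the client log file under /var/log/glusterfs/ is named after the mount path):

    gluster volume heal VOLNAME info
    gluster volume info VOLNAME
    gluster volume status VOLNAME
    umount /mnt/VOLNAME
    mount -t glusterfs SERVER:/VOLNAME /mnt/VOLNAME
    df -h /mnt/VOLNAME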
2013 Jul 26
5
[FEEDBACK] Governance of GlusterFS project
Hello everyone, We are in the process of formalizing the governance model of the GlusterFS project. Historically, the governance of the project has been loosely structured. This is an invitation to all of you to participate in this discussion and provide your feedback and suggestions on how we should evolve a formal model. Feedback from this thread will be considered to the extent possible in
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, My volume home is configured in replicate mode (version 3.12.4) with the bricks server1:/data/gluster/brick1 and server2:/data/gluster/brick1. server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it and did a > gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit
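Spelled out, the workflow described above looks roughly like this (paths taken from the post; the trailing full heal is one commonly suggested way to repopulate the emptied brick, not something the post confirms):

    gluster volume reset-brick home server2:/data/gluster/brick1 start
    # reformat the disk, remount it at the same path, then:
    gluster volume reset-brick home server2:/data/gluster/brick1 \
        server2:/data/gluster/brick1 commit force
    gluster volume heal home full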
2018 Mar 15
0
Disperse volume recovery and healing
Hi Victor, On Wed, Mar 14, 2018 at 12:30 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > I have a question about how disperse volumes handle brick failure. I'm > running version 3.10.10 on all systems. If I have a disperse volume in a > 4+2 configuration with 6 servers each serving 1 brick, and maintenance > needs to be performed on all systems, are there any
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the " HOSTNAME:BRICKPATH " twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. Another case is where you just want to change a brick's hostname to the IP address of that node. In this case you will follow the same steps but just have to provide the IP
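A sketch of that hostname-to-IP case (host name and address are hypothetical):

    gluster volume reset-brick VOLNAME myhost:/bricks/b1 start
    gluster volume reset-brick VOLNAME myhost:/bricks/b1 192.0.2.10:/bricks/b1 commit force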
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`. It means (or at least is intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
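The existing documented form, for comparison (hypothetical hosts; the third brick listed becomes the arbiter):

    gluster volume create testvol replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1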
2018 Mar 16
0
Disperse volume recovery and healing
On Fri, Mar 16, 2018 at 4:57 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > Xavi, does that mean that even if every node was rebooted one at a time > even without issuing a heal that the volume would have no issues after > running gluster volume heal [volname] when all bricks are back online? > No. After bringing up one brick and before stopping the next one, you need
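A hedged way to verify that condition between steps of a rolling restart (volume name is a placeholder):

    # repeat until every brick reports "Number of entries: 0"
    gluster volume heal VOLNAME info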