Displaying 5 results from an estimated 5 matches for "_brick_restoration_".
2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup:
gluster> volume info
Volume Name: myvolume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.2.218.188:/srv
Brick2: 10.116.245.136:/srv
Brick3: 10.206.38.103:/srv
Brick4: 10.114.41.53:/srv
Brick5: 10.68.73.41:/srv
Brick6: 10.204.129.91:/srv
I *killed* Brick #4 (kill -9 and then shut down the instance).
My
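With a 3 x 2 Distributed-Replicate layout, Brick4's replica partner is Brick3, so the
usual recovery is to swap a replacement brick into that pair and let self-heal
repopulate it. A minimal sketch, assuming a hypothetical replacement server at
10.0.0.99 with an empty /srv, and a release whose replace-brick supports 'commit
force' (older releases used a start/commit sequence, and pre-3.3 releases trigger
healing with a recursive stat from a client instead of 'volume heal'):

gluster peer probe 10.0.0.99
# swap the dead brick for the new one; 'commit force' skips migrating data off the
# unreachable brick and relies on self-heal from Brick3 instead
gluster volume replace-brick myvolume 10.114.41.53:/srv 10.0.0.99:/srv commit force
gluster volume heal myvolume full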
2012 Jan 05
1
Can't stop or delete volume
Hi,
I can't stop or delete a replica volume:
# gluster volume info
Volume Name: sync1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: thinkpad:/gluster/export
Brick2: quad:/raid/gluster/export
# gluster volume stop sync1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume sync1 does not exist
# gluster volume
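When 'volume info' still lists a volume that 'volume stop' claims does not exist, the
two glusterd daemons have usually drifted out of sync. A minimal diagnostic sketch,
assuming it is safe to restart glusterd on both peers (the service name and init
syntax vary by distribution; restarting glusterd does not stop the brick processes):

# run on both thinkpad and quad
gluster peer status              # both sides should show State: Peer in Cluster (Connected)
service glusterd restart
gluster volume stop sync1 force  # retry once the daemons agree on the volume
gluster volume delete sync1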
2011 Aug 12
2
Replace brick of a dead node
Hi!
Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to.
I have a two node cluster, with two bricks replicated, one on each node.
Let's say one of the nodes dies and is unreachable.
I want to be able to spin up a new node and replace the dead node's brick with one at a location on the new node.
The command 'gluster volume
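For reference, the flow is much the same as in the 2011 Sep 16 thread above: bring
the new node into the pool, swap the brick, and only then drop the dead peer. A
sketch with hypothetical names (node1 surviving, node2 dead, node3 the replacement)
and a hypothetical volume and brick path:

gluster peer probe node3                 # run from the surviving node1
gluster volume replace-brick myvol node2:/export/brick node3:/export/brick commit force
gluster volume heal myvol full           # repopulate the new brick from node1's copy
gluster peer detach node2 force          # the dead peer can go once its brick is gone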
2012 Nov 14
1
Howto find out volume topology
Hello,
I would like to find out the topology of an existing volume. For example,
if I have a distributed replicated volume, which bricks are the replication
partners?
Fred
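The brick order reported by 'gluster volume info' encodes the topology: in a
Distributed-Replicate volume, consecutive bricks (replica-count at a time) form one
replica set, and 'Number of Bricks: N x M' means N distribute subvolumes of M-way
replicas. A sketch for a hypothetical replica-2 volume named myvol:

gluster volume info myvol
# with replica 2, Brick1+Brick2 are partners, Brick3+Brick4 the next pair, and so on;
# this prints the pairs explicitly:
gluster volume info myvol | awk -F': ' '/^Brick[0-9]+:/ {b[++n]=$2}
  END {for (i=1; i<=n; i+=2) print b[i], "<->", b[i+1]}'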
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each.
Would the standard/recommended approach be to make each drive its own
filesystem and export 24 separate bricks, server1:/data1 ..
server1:/data24? Making a distributed replicated volume between this and
another server would then mean listing all 48 drives individually.
At the other extreme, I could put all 24 drives into some
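Whichever way the drives are organized, the brick-listing side is mostly about
ordering: with replica 2, consecutive bricks in the create command form a replica
pair, so the two servers have to be interleaved rather than listed back to back. A
sketch for the one-brick-per-drive layout, assuming hypothetical hosts server1 and
server2, one filesystem per drive mounted at /data1 .. /data24 on each, and a
hypothetical volume name bigvol:

# build the interleaved brick list so every replica pair spans both servers
BRICKS=""
for i in $(seq 1 24); do
    BRICKS="$BRICKS server1:/data$i server2:/data$i"
done
gluster volume create bigvol replica 2 transport tcp $BRICKS
gluster volume start bigvol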