similar to: Can't stop or delete volume

Displaying 20 results from an estimated 5000 matches similar to: "Can't stop or delete volume"

2011 Aug 12
2
Replace brick of a dead node
Hi! Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to. I have a two node cluster, with two bricks replicated, one on each node. Let's say one of the nodes dies and is unreachable. I want to be able to spin up a new node and replace the dead node's brick with a location on the new node. The command 'gluster volume
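For reference, a minimal sketch of the usual answer on recent releases, with illustrative hostnames and brick paths (the original command is cut off above):

  # from a surviving node, probe the replacement host
  gluster peer probe newnode
  # swap the dead brick for one on the new node in a single step
  gluster volume replace-brick myvol deadnode:/brick newnode:/brick commit force
  # copy the data back onto the new brick
  gluster volume heal myvol full

On very old 3.x releases, replace-brick instead went through start/status/commit subcommands.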
2012 Nov 14
1
Howto find out volume topology
Hello, I would like to find out the topology of an existing volume. For example, if I have a distributed replicated volume, which bricks are the replication partners? Fred
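The short answer for any replica-N volume: gluster volume info lists bricks in creation order, and each consecutive group of N bricks forms one replica set. For example:

  gluster volume info myvol
  # With "Number of Bricks: 3 x 2 = 6", the replication partners are the
  # consecutive pairs: (Brick1, Brick2), (Brick3, Brick4), (Brick5, Brick6)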
2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup:

gluster> volume info
Volume Name: myvolume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.2.218.188:/srv
Brick2: 10.116.245.136:/srv
Brick3: 10.206.38.103:/srv
Brick4: 10.114.41.53:/srv
Brick5: 10.68.73.41:/srv
Brick6: 10.204.129.91:/srv

I *killed* Brick #4 (kill -9 and then shut down the instance). My
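Since pairs are consecutive in a 3 x 2 volume, Brick4 replicates with Brick3, so its data can be healed from there once the brick is replaced. A sketch, with an illustrative replacement address:

  gluster peer probe 10.0.0.99
  gluster volume replace-brick myvolume 10.114.41.53:/srv 10.0.0.99:/srv commit force
  gluster volume heal myvolume full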
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem and export 24 separate bricks, server1:/data1 .. server1:/data24? Creating a distributed replicated volume between this and another server would then mean listing all 48 drives individually, as in the sketch below. At the other extreme, I could put all 24 drives into some
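A sketch of the 24-brick extreme, with paths as in the question; note that consecutive bricks become replica pairs, so the bricks must alternate between servers:

  gluster volume create bigvol replica 2 \
    server1:/data1 server2:/data1 \
    server1:/data2 server2:/data2 \
    ...
    server1:/data24 server2:/data24

The other extreme would be RAID-ing the 24 drives into a single filesystem and exporting one large brick per server.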
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi, I've set up Gluster geo-replication according to the manual:

# sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave config log-level DEBUG
# sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start
# sudo gluster volume geo-replication flvol ssh://root@
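A first check when a session goes faulty is its status and the master-side logs; the command below reuses the slave URL from the post:

  # sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave status

The session logs land under /var/log/glusterfs/geo-replication/ on the master.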
2004 Jan 30
2
FW: QoS extension to Net-SNMP
Hi. I did send this to 'jaazz@post.cz', but I suspect the list is a more appropriate/useful place for it. It's a question about Michal Charvat's QoS extension to Net-SNMP. When I look at the MIB entries for the QoS handles, I get something like this - enterprises.qos.qosObjectTable.qosObject.qosHandle.0.0 = Gauge32: 0
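For anyone reproducing this, a generic walk of the enterprises subtree (.1.3.6.1.4.1, where such private extensions register) shows the same entries; the community string and host below are placeholders:

  snmpwalk -v2c -c public localhost .1.3.6.1.4.1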
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello, I installed GlusterFS one month ago, and replication has many issues. First of all, our infrastructure: 2 storage arrays of 8TB in replication mode... We keep our backup files on these arrays, so 6TB of data. I want to replicate the data onto the second storage array, so I used this command: # gluster volume rebalance REP_SVG migrate-data start And gluster started to replicate; in 2 weeks
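Worth noting: rebalance ... migrate-data moves files between distribute subvolumes; it is not what synchronises replicas. On a replicated volume the self-heal mechanism does that, e.g.:

  gluster volume heal REP_SVG full     # queue a full heal of the volume
  gluster volume heal REP_SVG info     # watch what is still pending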
2011 Jul 01
1
keep 2 dirs in sync
I don't think there's a direct way to do this with rsync but I want to make sure I'm not missing something. I have two hosts (my portable and my desktop). I work on both hosts at different times and so I keep a few dirs sync'd between the two. I have a docs dir where I may be modifying files, adding files, renaming files and deleting files on *either* host. I have a nightly
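For genuine two-way sync with adds, renames and deletes on either side, a dedicated tool such as unison is the usual suggestion; the paths here are illustrative:

  unison ~/docs ssh://desktop//home/me/docs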
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the gluster volume mount and turn off one of the nodes, the machine/gluster mount will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see: [2011-06-06
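Two things usually explain this: the FUSE client only needs the server address at mount time (so ucarp mostly helps there), and after a node dies the client blocks until network.ping-timeout expires, 42 seconds by default. A mount that tolerates either server being down at mount time, with illustrative addresses:

  mount -t glusterfs -o backupvolfile-server=192.168.3.5 192.168.3.233:/myvol /mnt/gluster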
2011 May 05
1
CIFS Documentation
Hello, It would be a good idea to update the documentation about CIFS: http://gluster.com/community/documentation/index.php/Gluster_3.2:_Exporting_Gluster_Volumes_Through_Samba The simple truth is that there is no CIFS support in gluster itself, so it should not be in the docs. As I found out, this was also suggested earlier: http://www.mail-archive.com/gluster-users@
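The working pattern at the time was to mount the volume with the FUSE client and re-export that mountpoint through Samba; a minimal illustrative smb.conf share:

  [gluster]
      path = /mnt/glustervol
      read only = no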
2012 Oct 20
1
Gluster download links redirect to redhat
Dear Team, Please note that many download links on gluster.org redirect to redhat.com. Please refer to the links below and correct the download links. http://gluster.org/community/documentation/index.php/Gluster_3.2:_Downloading_and_Installing_the_Gluster_Virtual_Storage_Appliance_for_KVM Click on the link and try to download the Gluster virtual storage appliance for KVM, but it
2018 Feb 07
2
IP-based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: > After seeing the command history, I could see that you have 3 nodes, and > firstly you are peer probing 51.15.90.60 and 163.172.151.120 from > 51.15.77.14. > So here itself you have a 3 node cluster; after all this you are going > on node 2 and again peer probing 51.15.77.14. > Ideally it should work with the above steps, but due to some
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello, I installed glusterfs version 3.11.3 on 3 Ubuntu 16.04 nodes. All machines have the same /etc/hosts:

node1 hostname pri.ostechnix.lan
node2 hostname sec.ostechnix.lan
node3 hostname third.ostechnix.lan

51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third

The volume create command is root@
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to an existing cluster of Gluster 3.8 nodes. My understanding is that this should work and not be too much of a problem. Peer probe is successful but the node is rejected:

gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
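A likely culprit in mixed 3.8/3.10 clusters is the cluster operating version: a node is rejected if it cannot speak the cluster's op-version, which on 3.10 can be checked with:

  gluster volume get all cluster.op-version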
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t. address resolution on node 1. From the logs it's quite clear to me that node 1 is unable to resolve the address configured in /etc/hosts, whereas the other nodes do. Could you paste the gluster peer status output from all the nodes? Also can you please check if you're able to ping "pri.ostechnix.lan" from node1 only? Does
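A quick way to compare what each node's resolver actually returns, using the hostname from the thread:

  getent hosts pri.ostechnix.lan
  ping -c1 pri.ostechnix.lan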
2011 Apr 20
1
add brick unsuccessful
Hi again, I'm having trouble testing the add-brick feature. I'm using a replica 2 setup, currently with 2 nodes, and am trying to add 2 more. The command blocks for a bit until I get an "Add Brick unsuccessful" message. In etc-glusterfs-glusterd.vol.log below I can't seem to find anything: [2011-04-20 12:55:06.944593] I
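For a replica 2 volume, bricks have to be added in multiples of two, one on each new node; an illustrative invocation:

  gluster volume add-brick myvol server3:/export/brick server4:/export/brick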
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set data01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB 3 hours after I enabled this, the volume went down: [2017-06-16
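For context: the trash translator keeps deleted and truncated files under a .trashcan directory at the volume root, and can be switched back off while debugging:

  gluster volume set data01 features.trash off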
2018 Mar 21
2
Brick process not starting after reinstall
Hi all, our systems have suffered a host failure in a replica three setup. The host needed a complete reinstall. I followed the RH guide to 'replace a host with the same hostname' (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer
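The step from that guide most often missed is restoring the old UUID before glusterd first starts, so the peers recognise the reinstalled host; <old-uuid> is a placeholder read from /var/lib/glusterd/peers/ on a surviving node:

  # on the reinstalled host, edit /var/lib/glusterd/glusterd.info and set:
  UUID=<old-uuid>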
2017 Aug 17
3
Glusterd not working with systemd in redhat 7
Hi Team, I noticed that glusterd never starts when I reboot my Red Hat 7.1 server. The service is enabled but doesn't work. I tested with gluster 3.10.4 and gluster 3.10.5 and the problem still exists. When I start the service manually it works. I've also tested a Red Hat 6.6 server with gluster 3.10.4 and that works fine. The problem seems to be related to Red Hat 7.1. This is
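A common cause on RHEL/CentOS 7 is glusterd starting before the network is fully up; checking the boot log and ordering the unit after network-online.target is a frequently suggested workaround:

  journalctl -b -u glusterd
  # /etc/systemd/system/glusterd.service.d/override.conf
  [Unit]
  Wants=network-online.target
  After=network-online.target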
2018 Mar 06
4
Fixing a rejected peer
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2 when I couldn't create a new volume. I compared /var/lib/glusterd between them and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
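The commonly documented recovery for a rejected peer, run on the rejected node (glusterd.info is kept so the node retains its UUID):

  systemctl stop glusterd
  # remove everything under /var/lib/glusterd except glusterd.info
  find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
  systemctl start glusterd
  gluster peer probe <good-node>     # any healthy peer
  systemctl restart glusterd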