Lonni J Friedman
2012-Sep-18 18:03 UTC
[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?
Greetings,
I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
volume on two bricks. This morning I deleted it successfully:
########
[root@farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to
continue? (y/n) y
Stopping volume gv0 has been successful
[root@farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase all information about the volume. Do you
want to continue? (y/n) y
Deleting volume gv0 has been successful
[root@farm-ljf0 ~]# gluster volume info all
No volumes present
########

I then attempted to create a new volume using the same bricks that
used to be part of the (now) deleted volume, but it keeps failing,
claiming that the brick is already part of a volume:
########
[root@farm-ljf1 ~]# gluster volume create gv0 rep 2 transport tcp 10.31.99.165:/mnt/sdb1 10.31.99.166:/mnt/sdb1
/mnt/sdb1 or a prefix of it is already part of a volume
[root@farm-ljf1 ~]# gluster volume info all
No volumes present
########

Note that farm-ljf0 is 10.31.99.165 and farm-ljf1 is 10.31.99.166. I
also tried restarting glusterd (and glusterfsd) hoping that might clear
things up, but it had no impact.

How can /mnt/sdb1 be part of a volume when there are no volumes present?
Is this a bug, or am I just missing something obvious?

thanks
harry mangalam
2012-Sep-18 18:18 UTC
[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?
I believe gluster writes two entries into the top level of your gluster
brick filesystems:

-rw-r--r--   2 root root   36 2012-06-22 15:58 .gl.mount.check
drw-------  258 root root 8192 2012-04-16 13:20 .glusterfs

You will have to remove these, as well as all the other filesystem
metadata left over from the old volume, before you can re-add the
filesystem as another brick. Or just remake the filesystem -
instantaneous with XFS, less so with ext4.

hjm

On Tuesday, September 18, 2012 11:03:35 AM Lonni J Friedman wrote:
> [...]

--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
What does it say about a society that would rather send its children
to kill and die for oil than to get on a bike?
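To make that concrete, something like the following on each former brick
should do it (a sketch, untested here; /mnt/sdb1 is the brick path from
the original post, and the data itself stays in place if you only remove
the two dot-entries):
########
# Remove gluster's bookkeeping entries from the brick top level.
# Assumes the brick filesystem is still mounted at /mnt/sdb1.
rm -rf /mnt/sdb1/.glusterfs
rm -f /mnt/sdb1/.gl.mount.check
########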
Kaleb Keithley
2012-Sep-18 18:26 UTC
[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?
There are xattrs on the top-level directory of the old brick that
gluster detects; that is what is causing this. I personally always
create my bricks on a subdir. If you do that, you can simply
rmdir/mkdir the directory when you want to delete a gluster volume.
You can clear the xattrs, or "nuke it from orbit" with mkfs on the
volume device.

----- Original Message -----
From: "Lonni J Friedman" <netllama@gmail.com>
To: gluster-users@gluster.org
Sent: Tuesday, September 18, 2012 2:03:35 PM
Subject: [Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?

[...]
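For the record, clearing the xattrs by hand looks roughly like this (a
sketch going from memory for 3.3; verify the exact attribute names on
your own bricks with getfattr before removing anything):
########
# Inspect the gluster xattrs on the brick root:
getfattr -d -m . -e hex /mnt/sdb1
# Remove the volume membership markers and the bookkeeping directory:
setfattr -x trusted.glusterfs.volume-id /mnt/sdb1
setfattr -x trusted.gfid /mnt/sdb1
rm -rf /mnt/sdb1/.glusterfs
# Restart glusterd so it forgets the stale state (Fedora 16 is systemd):
systemctl restart glusterd.service
########
Run that on both servers before retrying the volume create.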
Doug Hunley
2012-Sep-20 18:56 UTC
[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?
On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian <joe@julianfamily.org> wrote:
> Because it's a vastly higher priority to preserve data. Just because I
> delete a volume doesn't mean I want the data deleted. In fact, more often
> than not, it's quite the opposite. The barrier to data loss is high, and it
> should remain high.

OK, again I'll ask: what is a typical scenario in which I, as a gluster
admin, would delete a volume, add one (or more) of its former bricks to
another volume, and want to keep that data intact? I can't think of a
real world example, but I assume they exist, as this is the model taken
by gluster.

--
Douglas J Hunley (doug.hunley@gmail.com)
Twitter: @hunleyd                Web: douglasjhunley.com
G+: http://goo.gl/sajR3
Joe Julian
2012-Sep-21 06:11 UTC
[Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?
Adding a --yes-i-know-what-im-doing type option is something I would
get behind (and have suggested, myself). File a bug report as an
enhancement request.

"Dr. Jörg Petersen" <joerg.h.petersen@googlemail.com> wrote:
>Hello,
>
>What I regularly do:
>1) Create a snapshot (btrfs) of each brick
>2) Reassemble the snapshots into a new (snapshot) gluster volume
>
>When reassembling the snapshots I have to remove all xattrs and the
>.glusterfs directory.
>Since btrfs is painfully slow at deleting, I would prefer an option to
>reuse the content, which should be valid for the new (snapshot)
>gluster volume...
>
>Greetings,
>Jörg
>
>
>On 20.09.2012 20:56, Doug Hunley wrote:
>> [...]
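For anyone following along, Jörg's current workflow would look something
like this, including the slow cleanup step that the proposed option
would make unnecessary (a sketch with hypothetical paths; /data/brick is
assumed to be a btrfs subvolume, and the xattr names should be verified
with getfattr on your own setup):
########
# Snapshot the live brick (cheap and instantaneous on btrfs):
btrfs subvolume snapshot /data/brick /data/brick-snap
# Strip the old volume's identity so gluster will accept the brick.
# Deleting .glusterfs is the step that is painfully slow on btrfs:
setfattr -x trusted.glusterfs.volume-id /data/brick-snap
setfattr -x trusted.gfid /data/brick-snap
rm -rf /data/brick-snap/.glusterfs
# Create the new snapshot volume from the cleaned-up brick:
gluster volume create gv0-snap 10.31.99.165:/data/brick-snap
########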