Ernie Dunbar
2015-Feb-11 19:06 UTC
[Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
I nuked the entire partition with mkfs, just to be *sure*, and I still get the error message:

volume create: gv0: failed: /brick1/gv0 is already part of a volume

Clearly, there's some bit of data being kept somewhere else besides in /brick1?

On 2015-02-11 01:03, Kaushal M wrote:

> This happens because of 2 things:
>
> 1. GlusterFS writes an extended attribute containing the volume-id to every brick directory when a volume is created. This is done to prevent data being written to the root partition, in case the partition containing the brick wasn't mounted for any reason.
>
> 2. Deleting a GlusterFS volume does not remove any data in the brick directories, nor the brick directories themselves. We leave the decision of cleaning up the data to the user. The extended attribute is also not removed, so that an unused brick is not inadvertently added to another volume, as that could lead to losing existing data.
>
> So if you want to reuse a brick, you need to clean it up and recreate the brick directory.
>
> On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar <maillist at lightspeed.ca> wrote:
>
>> I'm just going to paste this here to see if it drives you as mad as it does me.
>>
>> I'm trying to re-create a new volume in gluster. The old volume is empty and can be removed. And besides that, this is just an experimental server that isn't in production just yet. Who cares. I just want to start over again because it's not working.
>>
>> root at nfs1:/home/ernied# gluster
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>> gluster> vol in
>> No volumes present
>> gluster> root at nfs1:/home/ernied# ^C
>> root at nfs1:/home/ernied# rm -r /brick1/gv0
>> root at nfs1:/home/ernied# gluster
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: Host nfs2 is not in 'Peer in Cluster' state
>> gluster> peer probe nfs2
>> peer probe: success. Host nfs2 port 24007 already in peer list
>> gluster> volume list
>> No volumes present in cluster
>> gluster> volume delete gv0
>> Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
>> volume delete: gv0: failed: Volume gv0 does not exist
>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>> gluster> vol create evil replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>> volume create: evil: failed: /brick1/gv0 is already part of a volume
>> gluster>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
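Kaushal's advice above (clean up the brick and recreate the brick directory) might look like the following sketch. The brick path /brick1/gv0 is the one from the thread; this would need to be run as root on every server hosting the brick (nfs1 and nfs2 here), and the trusted.* attribute names are the ones GlusterFS is known to use, not verified against this particular cluster:

```shell
# Run on EACH server that hosts the brick (nfs1 and nfs2 in this thread).
BRICK=/brick1/gv0

# Option 1: strip the stale GlusterFS metadata from the existing directory.
setfattr -x trusted.glusterfs.volume-id "$BRICK"   # the volume-id xattr Kaushal describes
setfattr -x trusted.gfid "$BRICK"                  # may not be present; errors are harmless
rm -rf "$BRICK/.glusterfs"                         # internal metadata directory

# Option 2: simpler - delete and recreate the brick directory entirely.
rm -rf "$BRICK"
mkdir -p "$BRICK"
```

Either option leaves a directory that "volume create" should accept again; option 2 is the usual quick fix when the brick holds no data worth keeping.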
Atin Mukherjee
2015-Feb-12 10:55 UTC
[Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
On 02/12/2015 12:36 AM, Ernie Dunbar wrote:

> I nuked the entire partition with mkfs, just to be *sure*, and I still
> get the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides in
> /brick1?

This shouldn't happen unless you have an existing volume or you haven't removed the xattrs. Can you please double-check the output of 'gluster volume info'? You can also query the xattrs on that brick path directly.

~Atin

> On 2015-02-11 01:03, Kaushal M wrote:
>
>> This happens because of 2 things:
>>
>> 1. GlusterFS writes an extended attribute containing the volume-id to every brick directory when a volume is created. This is done to prevent data being written to the root partition, in case the partition containing the brick wasn't mounted for any reason.
>>
>> 2. Deleting a GlusterFS volume does not remove any data in the brick directories, nor the brick directories themselves. We leave the decision of cleaning up the data to the user. The extended attribute is also not removed, so that an unused brick is not inadvertently added to another volume, as that could lead to losing existing data.
>>
>> So if you want to reuse a brick, you need to clean it up and recreate the brick directory.
>>
>> On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar <maillist at lightspeed.ca> wrote:
>>
>>> I'm just going to paste this here to see if it drives you as mad as it does me.
>>>
>>> I'm trying to re-create a new volume in gluster. The old volume is empty and can be removed. And besides that, this is just an experimental server that isn't in production just yet. Who cares. I just want to start over again because it's not working.
>>>
>>> root at nfs1:/home/ernied# gluster
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>> gluster> vol in
>>> No volumes present
>>> gluster> root at nfs1:/home/ernied# ^C
>>> root at nfs1:/home/ernied# rm -r /brick1/gv0
>>> root at nfs1:/home/ernied# gluster
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: Host nfs2 is not in 'Peer in Cluster' state
>>> gluster> peer probe nfs2
>>> peer probe: success. Host nfs2 port 24007 already in peer list
>>> gluster> volume list
>>> No volumes present in cluster
>>> gluster> volume delete gv0
>>> Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
>>> volume delete: gv0: failed: Volume gv0 does not exist
>>> gluster> vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>> gluster> vol create evil replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0
>>> volume create: evil: failed: /brick1/gv0 is already part of a volume
>>> gluster>

--
~Atin
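Atin's suggestion to query the xattrs on the brick path might look like this; getfattr ships in the attr package, the path is the one from the thread, and the sample attribute names in the comment are illustrative rather than output captured from this cluster:

```shell
# Dump every extended attribute on the brick directory, hex-encoded.
# Run as root: GlusterFS stores these in the trusted.* namespace,
# which ordinary users cannot read.
getfattr -d -m . -e hex /brick1/gv0 || echo "no xattrs readable at /brick1/gv0"

# On a directory that was once a brick, you would expect entries such as:
#   trusted.gfid=0x...
#   trusted.glusterfs.volume-id=0x...
# If trusted.glusterfs.volume-id is present, the "already part of a volume"
# refusal is working as designed.
```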
Justin Clift
2015-Feb-12 15:21 UTC
[Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.
On 11 Feb 2015, at 19:06, Ernie Dunbar <maillist at lightspeed.ca> wrote:

> I nuked the entire partition with mkfs, just to be *sure*, and I still get the error message:
>
> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>
> Clearly, there's some bit of data being kept somewhere else besides in /brick1?

Yeah, this frustrates the heck out of me every time too.

As a thought, did you nuke the /brick1/gv0 directory on *both* of the servers? Looking at the cut-n-pasted log below, it seems like you nuked the dir on nfs1, but not on nfs2.

And it'd probably really help on our end if the "failed: /brick1/gv0 is already part of a volume" message included the node name as well, just for it to be super clear. ;)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
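Justin's point, that the cleanup has to happen on every replica host before "vol create" will succeed, could be driven from one node with a loop like the one below. The hostnames are the ones from the thread and passwordless root ssh is assumed; the "echo" makes it a dry run that only prints what would execute, so you can drop it once the commands look right:

```shell
# Dry-run: print the per-host cleanup for BOTH replica servers.
# Remove the leading "echo" to actually run it (requires root ssh).
for host in nfs1 nfs2; do
    echo ssh root@"$host" "rm -rf /brick1/gv0 && mkdir -p /brick1/gv0"
done
```

After that runs on both nfs1 and nfs2, the original "vol create gv0 replica 2 nfs1:/brick1/gv0 nfs2:/brick1/gv0" should no longer hit the "already part of a volume" check on either node.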