Now I understand the situation better.
Gluster uses hard links (two directory entries pointing to the same inode), and until the
hard links are deleted, the data will still be there.
The simplest approach is to move everything from the brick to the FUSE mount
point and then wipe the old brick (which is not part of the volume).
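For example, a rough sketch (assuming the old brick is at /old_brick and the new
volume is already FUSE-mounted at /mnt/vol2; adjust the paths to your layout):

# mv /old_brick/somedir /mnt/vol2/
# rm -rf /old_brick/.glusterfs    # only once everything has been moved off the brick

Keep in mind that the space on the old disk is only returned once the matching
hard links under .glusterfs are gone as well.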
Another approach is to use utilities like 'tree --inodes', find (for
example: find /gluster_bricks -exec ls -li {} \;) or 'ls -li' to collect
the list of files and their inodes.
The hard links are in the .glusterfs directory and after a successful move you
can delete them.
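A quick way to see that pairing (paths are illustrative; the inode number and the
gfid path will differ on your brick):

# ls -li /old_brick/data/somefile            # note the inode number and the link count of 2
# find /old_brick/.glusterfs -inum <inode>   # prints the matching hard link inside .glusterfs

Both entries point to the same inode, so the space is only freed once both of
them are removed.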
If you already moved (not copied with 'cp') some files away, you can identify
their leftover hard links like this:
find .glusterfs -iname '*' -exec stat {} \; | grep -E 'File|Links: 1'
Best Regards,
Strahil Nikolov
On Sun, Nov 14, 2021 at 15:29, Taste-Of-IT <kontakt at taste-of-it.de> wrote:
Hi,
and thanks for your help, but I think you do not understand the situation correctly.
The volume is dead; I couldn't reuse it. I reinstalled the OS and added the
storage with the old volume. So there is actually no vol1 which I can use in
GlusterFS. All I have is the old data structure with .glusterfs files and so on.
I now want to migrate all files from vol1 to the newly created and working vol2. But
if I move the files directly from the directory on each node to the mounted new vol2, the
disk usage remains the same - disk space isn't freed up.
What can I do? getfattr shows nothing. If I move one folder and look into
.glusterfs, the folder seems to be removed, but df -h shows the same free disk space,
and so I run into trouble.
Thx
Taste
On 25.10.2021 21:04:10, Strahil Nikolov wrote:
> To be honest, I can't imagine the problem, actually.
>
> When you reuse bricks you have two options:
> 1. Recreate the filesystem. It's simpler and easier
> 2. Do the following:
> Delete all previously existing data in the brick, including the .glusterfs subdirectory.
> Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
> Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes.
> Run # setfattr -x attribute brick to remove the attributes relating to the GlusterFS file system.
> The trusted.glusterfs.dht attribute for a distributed volume is one such example of attributes that need to be removed. It is necessary to remove the extended attributes `trusted.gfid` and `trusted.glusterfs.volume-id`, which are unique for every Gluster brick. These attributes are created the first time a brick gets added to a volume.
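> Put together, a rough sketch of those steps (assuming the brick is at /gluster_bricks/brick1; adjust to your layout):
>
> # rm -rf /gluster_bricks/brick1/* /gluster_bricks/brick1/.glusterfs
> # setfattr -x trusted.glusterfs.volume-id /gluster_bricks/brick1
> # setfattr -x trusted.gfid /gluster_bricks/brick1
> # getfattr -d -m . /gluster_bricks/brick1                     # review what is still set
> # setfattr -x trusted.glusterfs.dht /gluster_bricks/brick1    # example of a leftover attribute to remove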
>
> As you still have a ".glusterfs", you didn't reintegrate the brick.
>
> The only other option I know is to use add-brick with the "force" option.
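> Roughly (volume name and brick path are placeholders):
>
> # gluster volume add-brick vol1 node2:/gluster_bricks/brick1 force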
>
> Can you provide a short summary (commands only) of how the issue happened, what you did, and what error is coming up?
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On 20 October 2021 at 14:06:29 GMT+3, Taste-Of-IT <kontakt at taste-of-it.de> wrote:
>
>
>
>
>
> Hi,
>
> I am now moving from the dead vol1 to the new vol2, mounted via NFS.
>
> The problem is that the storage usage rises instead of staying the same as expected. Any idea? I think it has something to do with the .glusterfs directories on the dead vol1.
>
> thx
>
> Webmaster Taste-of-IT.de
>
> On 29.08.2021 12:42:18, Strahil Nikolov wrote:
> > Best case scenario, you just mount via FUSE on the 'dead' node and start copying.
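> > Something along these lines (volume name and paths are only examples):
> >
> > # mount -t glusterfs node1:/vol1 /mnt/vol1
> > # cp -a /old_brick/data /mnt/vol1/    # copy the payload directories, not .glusterfs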
> > Yet, in your case you don't have enough space. I guess you can try on 2 VMs to simulate the failure, rebuild, and then forcefully re-add the old brick. It might work, it might not ... at least it's worth trying.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > Sent from Yahoo Mail on Android
> >
> > On Thu, Aug 26, 2021 at 15:27, Taste-Of-IT <kontakt at taste-of-it.de> wrote:
> > Hi,
> > What do you mean? Copy the data from the dead node to the running node and then add the newly installed node to the existing vol1, and after that run a rebalance? If so, this is not possible, because node1 does not have enough free space to take everything from node2.
> >
> > thx
> >
> > On 22.08.2021 18:35:33, Strahil Nikolov wrote:
> > > Hi,
> > >
> > > The best way is to copy the files over the FUSE mount and later add the bricks and rebalance.
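> > > Roughly, once the data has been copied off (volume and brick names are placeholders):
> > >
> > > # gluster volume add-brick vol1 node2:/gluster_bricks/brick1
> > > # gluster volume rebalance vol1 start
> > > # gluster volume rebalance vol1 status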
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > Sent from Yahoo Mail on Android
> > >
> > > On Thu, Aug 19, 2021 at 23:04, Taste-Of-IT <kontakt at taste-of-it.de> wrote:
> > > Hello,
> > >
> > > I have two nodes with a distributed volume. The OS is on a separate disk, which crashed on one node. However, I can reinstall the OS, and the RAID6 which is used for the distributed volume was rebuilt. The question now is how to re-add the brick, with its data, back to the existing old volume.
> > >
> > > If this is not possible, what about this idea: I create a new vol2, distributed over both nodes, and move the files directly from the directory to the new volume via an NFS-Ganesha share?!
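> > > (For reference, creating such a distributed volume would look roughly like this; names and paths are placeholders:
> > > # gluster volume create vol2 node1:/gluster_bricks/brick2 node2:/gluster_bricks/brick2
> > > # gluster volume start vol2)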
> > >
> > > thx
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users