Will Rouesnel
2008-Jul-28 14:57 UTC
[Gluster-users] Still having problems with non-healing files
Taking extreme measures, I went through my home directory and deleted all the files from the underlying storage bricks which refused to self-heal, then allowed them to repropagate via unison.

This seemed to work for a while; however, while running the latest TLA code I've suddenly got exactly the same problem back on exactly the same files. Log and spec files are attached (glusterfs.log, glusterfs-client.vol, glusterfs-server.vol).

Unison is running via an SSH login to my personal user account, so I don't think that's the problem; this seems to be some type of persistent bug.

- Will Rouesnel
Will Rouesnel
2008-Jul-30 14:42 UTC
[Gluster-users] Still having problems with non-healing files
Taking extreme measures, I went through my home directory and deleted all the files from the underlying storage bricks which refused to self-heal, then allowed them to repropagate via unison.

This seemed to work for a while; however, while running the latest TLA code I've suddenly got exactly the same problem back on exactly the same files. The log (which includes the spec files) is attached.

Unison is running via an SSH login to my personal user account, so I don't think that's the problem; this seems to be some type of persistent bug.

I'm also having some rather more disturbing issues of data going missing for no good reason. While it's possible a user is deleting it, it seems pretty unlikely. Are there any circumstances under which this could happen without actual delete commands being issued?

- Will Rouesnel
Keith Freedman
2008-Aug-02 18:13 UTC
[Gluster-users] Still having problems with non-healing files
As far as I know, autoheal is part of the AFR translator. If you want to mirror across those servers (I'd do this in the server config, but you can do it on the client), then you need to add an AFR translator to mirror some of your storage bricks, and then unify your AFR volumes.

Keith

At 07:53 PM 7/28/2008, Will Rouesnel wrote:
> Taking extreme measures, I went through my home directory and
> deleted all the files from the underlying storage bricks which
> refused to self-heal and then allow them to repropagate via unison.
>
> This seemed to work for a while, however while running the latest
> TLA code I've all of a sudden got exactly the same problem back on
> the exact same files. Log + spec files attached.
>
> Unison is running via an SSH login to my personal user account, so I
> don't think that's the problem - this seems to be some type of persistent bug.
>
> - Will Rouesnel
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
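For readers of the archive, here is a rough sketch of the layout Keith describes, in the volfile syntax of GlusterFS releases from this era. All hostnames, volume names, and the remote-subvolume name are placeholders for illustration, not taken from Will's attached spec files:

```
# Client-side mirroring: one protocol/client volume per server brick,
# then a cluster/afr translator on top of the pair.
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # placeholder hostname
  option remote-subvolume brick       # placeholder exported volume
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # placeholder hostname
  option remote-subvolume brick
end-volume

volume mirror0
  type cluster/afr
  subvolumes remote1 remote2
end-volume

# Several such AFR volumes can then be aggregated with a cluster/unify
# translator, which additionally requires a scheduler option.
```

Self-heal is then driven by the AFR translator on the node where it is configured, which is why Keith suggests placing it server-side.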
This probably is better posed in the gluster-devel mailing list, and it's not a big deal and probably not something that needs to be solved, but I felt I should share the situation, just in case anyone else needs to do the same thing.

So, I have 2 servers AFRing each other. They had the same amount of disk space, but on different sized drives. Since gluster just sits on top of the underlying filesystem, what I did was:

/dev/disk1 mounts to /gluster
/dev/disk2 mounts to /gluster/extra

In the gluster translator, I specify the source path as /gluster and mount it to /home, so in /home I now see /home/extra and /home/(all the other dirs). This works fine, and AFR seems to be happy.

When I do a df, however, /home is reported as the same size as /dev/disk1, rather than the combined values of /dev/disk1 and /dev/disk2. I certainly understand why, and realize it'd be quite difficult to resolve this issue.

It's not very full, so my concern is: might gluster overfill /home/extra because it thinks it has more space than it does, or might it think the disk is full when it's not and there's still room to write files (if they're being written in /home/extra)?

As a testament to gluster, this non-standard configuration does seem to work just fine, and the underlying filesystems are actually even different: one is xfs, the other is ext3 (mounted with the user_xattr option). Ultimately, I'll sanitize this configuration, but the fact that it works so well now is impressive.

Keith
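The df discrepancy Keith observes comes down to how free space is queried: statfs/statvfs answers for whichever filesystem actually backs the path you ask about, so a second partition mounted at a subdirectory contributes nothing when the parent directory is queried. A minimal, gluster-independent illustration (any local path works):

```python
import os

def free_bytes(path):
    """Free space on the filesystem that actually backs `path`."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# statvfs on a parent directory reports only the filesystem mounted
# there; a partition mounted at a subdirectory is invisible to it.
# This is why df on the glusterfs mount shows /dev/disk1's capacity
# alone, and why the translator's free-space accounting can be wrong
# for files that land under the nested mount.
print(free_bytes("/"))
```

The same limitation applies to any daemon that calls statfs on its configured export path, not just glusterfs.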
Keith,

On Sun, Aug 3, 2008 at 8:32 AM, Keith Freedman <freedman at freeformit.com> wrote:
> [...]
> It's not very full, so my concern is.. might gluster
> overfill /home/extra because it thinks it has more space than it
> does, or might it think the disk is full when it's not and there's
> still room to write files (if they're being written in /home/extra).

Yes, this can happen, as glusterfs won't have any idea that another disk partition has been mounted in one of the sub-directories.

You could use LVM and combine the two disks.

Regards
Krishna
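For the archive, Krishna's LVM suggestion amounts to something like the following. This is a procedural sketch only: the device names are placeholders, the commands destroy any existing data on those disks, and the data would have to be copied elsewhere first (which is exactly the obstacle Keith raises in his reply):

```
# Combine two whole disks into one volume group, then carve a single
# logical volume spanning all free extents. DESTRUCTIVE; placeholder
# device names.
pvcreate /dev/disk1 /dev/disk2
vgcreate gluster_vg /dev/disk1 /dev/disk2
lvcreate -l 100%FREE -n gluster_lv gluster_vg
mkfs.ext3 /dev/gluster_vg/gluster_lv
mount /dev/gluster_vg/gluster_lv /gluster
```

With a single filesystem backing /gluster, statfs reports the combined capacity and the free-space ambiguity disappears.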
At 12:01 AM 8/6/2008, Krishna Srinivas wrote:
> Keith,
>
> > It's not very full, so my concern is.. might gluster
> > overfill /home/extra because it thinks it has more space than it
> > does, or might it think the disk is full when it's not and there's
> > still room to write files (if they're being written in /home/extra).
>
> Yes this can happen as glusterfs won't have any idea that another
> disk partition has been mounted in one of the sub directories.
>
> You could use LVM and combine the two disks.
>
> Regards
> Krishna

Thanks for your response. Yes, I will put them on LVM; the problem is, I can't really do that on these existing machines because I don't have anywhere to move the data while I create the LVM volumes. However, I'll just make sure I resolve the issue before disk space becomes a problem.

Keith