We had a brick drop out on us when XFS shut down the filesystem due to some errors. A pretty quick check/repair/check and remount later, the filesystem SEEMS healthy, and after restarting glusterd on that system things SEEMED okay, but now the brick log is filling with this, endlessly:

[2013-05-02 12:36:20.117170] W [marker-quota.c:2047:mq_inspect_directory_xattr] 0-gstore-marker: cannot add a new contribution node
[2013-05-02 12:36:20.121419] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.1/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7fed88597a3e] (-->/usr/lib64/glusterfs/3.3.1/xlator/features/marker.so(marker_lookup+0x300) [0x7fed887at
[2013-05-02 12:36:20.122146] W [marker-quota.c:2047:mq_inspect_directory_xattr] 0-gstore-marker: cannot add a new contribution node
[2013-05-02 12:36:20.163452] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.1/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7fed88597a3e] (-->/usr/lib64/glusterfs/3.3.1/xlator/features/marker.so(marker_lookup+0x300) [0x7fed887at
[2013-05-02 12:36:20.164418] W [marker-quota.c:2047:mq_inspect_directory_xattr] 0-gstore-marker: cannot add a new contribution node
[2013-05-02 12:36:20.165283] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.1/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7fed88597a3e] (-->/usr/lib64/glusterfs/3.3.1/xlator/features/marker.so(marker_lookup+0x300) [0x7fed887at
[2013-05-02 12:36:20.165878] W [marker-quota.c:2047:mq_inspect_directory_xattr] 0-gstore-marker: cannot add a new contribution node
[2013-05-02 12:36:20.170552] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.1/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7fed88597a3e] (-->/usr/lib64/glusterfs/3.3.1/xlator/features/marker.so(marker_lookup+0x300) [0x7fed887at
[2013-05-02 12:36:20.171489] W [marker-quota.c:2047:mq_inspect_directory_xattr] 0-gstore-marker: cannot add a new contribution node

Any thoughts? I'm not finding much on this sort of error out there.

This is a 10x2 distributed-replicated volume. Worst-case scenario, I want to pull this brick (and I assume I've got to pull its replica twin as well). What's the best way to preserve the data on it? "Remove" the two bricks, then sync the data from one of them (since they should be identical) over to the client mount so it gets distributed across the remaining 9 replica pairs, then wipe and re-add the 2 bricks I just removed and rebalance?

--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nicholson at harvard.edu
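
P.S. For the worst-case path, here's roughly the sequence I have in mind, as a sketch only: server names, brick paths, the mount point, and the device are all placeholders, and I'd want to double-check against the 3.3.1 docs whether remove-brick wants "force" here or the start/status/commit sequence.

  # take the suspect replica pair out of the volume (no data migration,
  # since the plan is to reinject the files through a client mount)
  gluster volume remove-brick gstore server10a:/export/brick server10b:/export/brick force

  # copy the data from one of the now-detached bricks back in through the
  # client mount, skipping gluster's internal .glusterfs directory, so DHT
  # spreads it across the remaining 9 replica pairs
  rsync -av --exclude=.glusterfs /export/brick/ /mnt/gstore/

  # wipe the old bricks (recreating the filesystem also clears the old
  # trusted.* xattrs on the brick root), then add the pair back and rebalance
  mkfs.xfs -f /dev/XXX        # placeholder device; run on both servers
  gluster volume add-brick gstore server10a:/export/brick server10b:/export/brick
  gluster volume rebalance gstore start
  gluster volume rebalance gstore status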