Joe Julian
2014-Jul-25 21:36 UTC
[Gluster-users] Can anyone else shed any light on this warning?
How can it come about? Is this from replacing a brick days ago? Can I prevent it from happening?

[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd] 0-fuse-resolve: migration of basefd (ptr:0x7f17cb846444 inode-gfid:87544fde-9bad-46d8-b610-1a8c93b85113) did not complete, failing fop with EBADF (old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)

It's critical because it causes a segfault every time. :(
Pranith Kumar Karampuri
2014-Jul-26 05:36 UTC
[Gluster-users] [Gluster-devel] Can anyone else shed any light on this warning?
On 07/26/2014 03:06 AM, Joe Julian wrote:
> How can it come about? Is this from replacing a brick days ago? Can I
> prevent it from happening?
>
> [2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd]
> 0-fuse-resolve: migration of basefd
> (ptr:0x7f17cb846444 inode-gfid:87544fde-9bad-46d8-b610-1a8c93b85113)
> did not complete, failing fop with
> EBADF (old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)
>
> It's critical because it causes a segfault every time. :(

Joe,
This is the fd migration code. When the brick layout changes (a graph change), every open file needs to be re-opened on the new graph. That re-open appears to have failed, and the failure probably leads to the crash because of an extra unref in the failure code path. Could you add the brick/mount logs to the bug https://bugzilla.redhat.com/show_bug.cgi?id=1123289? What is the configuration of the volume?

Pranith
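[Editor's note: for readers unfamiliar with the failure mode described above, the following is a minimal C sketch of how an extra unref on an error path can turn a failed fd migration into a crash. All names here (sketch_fd_t, fd_ref, fd_unref, migrate_basefd) are invented for illustration and are not GlusterFS's actual internals.]

    /* Sketch of the bug pattern: a reference-counted fd whose error
     * path drops a reference that the caller also drops. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int refcount;
        int remote_fd;      /* fd on the new subvolume; -1 until reopened */
    } sketch_fd_t;

    static sketch_fd_t *fd_ref(sketch_fd_t *fd) {
        fd->refcount++;
        return fd;
    }

    static void fd_unref(sketch_fd_t *fd) {
        if (--fd->refcount == 0)
            free(fd);       /* last reference gone: fd memory is released */
    }

    /* Re-open the fd on the new graph; returns -1 on failure. */
    static int migrate_basefd(sketch_fd_t *fd) {
        int reopened = -1;  /* pretend the re-open on the new subvolume failed */
        if (reopened < 0) {
            fd_unref(fd);   /* BUG: extra unref on the failure path */
            return -1;
        }
        fd->remote_fd = reopened;
        return 0;
    }

    int main(void) {
        sketch_fd_t *fd = calloc(1, sizeof(*fd));
        fd->remote_fd = -1;
        fd_ref(fd);                  /* caller holds the sole reference */

        if (migrate_basefd(fd) < 0)
            fprintf(stderr, "failing fop with EBADF\n");

        fd_unref(fd);                /* caller drops its reference: refcount
                                      * already hit zero inside the failed
                                      * migration, so this is a use-after-free
                                      * / double free -> segfault */
        return 0;
    }

Running a build of this sketch under valgrind or AddressSanitizer flags the second fd_unref immediately, which is the same class of diagnosis the brick/mount logs and a core dump would support for the real crash.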