Could you please provide the backtrace from the core dump and the complete client log from the time of the crash?
Is the crash seen on 3.6?
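If a core file was captured, a minimal gdb session along these lines should produce the backtrace (the qemu binary path and core path here are assumptions, adjust them to your setup):

    # open the core against the qemu binary that crashed
    gdb /usr/bin/qemu-system-x86_64 /path/to/core
    # inside gdb: full backtrace of every thread
    (gdb) thread apply all bt full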
Regards,
Poornima
----- Original Message -----
> From: "Josh Boon" <gluster at joshboon.com>
> To: "Gluster-users at gluster.org List" <gluster-users at
gluster.org>
> Sent: Sunday, March 15, 2015 7:55:52 PM
> Subject: Re: [Gluster-users] QEMU gfapi segfault
> Any more thoughts on this? I'm considering a rollback to Gluster 3.4 as I never had these issues and this is keeping me awake at night.
> ----- Original Message -----
> From: "Josh Boon" <gluster at joshboon.com>
> To: "RAGHAVENDRA TALUR" <raghavendra.talur at gmail.com>
> Cc: "Gluster-users at gluster.org List" <gluster-users at
gluster.org>
> Sent: Friday, March 6, 2015 7:05:33 AM
> Subject: Re: [Gluster-users] QEMU gfapi segfault
> The qemu log is also the client log. The client was configured for info notices only; I've since turned it up to debug level in case I can get more, but I don't remember the client log being that interesting.
> ----- Original Message -----
> From: "RAGHAVENDRA TALUR" <raghavendra.talur at gmail.com>
> To: "Josh Boon" <gluster at joshboon.com>
> Cc: "Vijay Bellur" <vbellur at redhat.com>,
"Gluster-users at gluster.org List"
> <gluster-users at gluster.org>
> Sent: Friday, March 6, 2015 8:17:08 AM
> Subject: Re: [Gluster-users] QEMU gfapi segfault
> On Fri, Mar 6, 2015 at 4:50 AM, Josh Boon <gluster at joshboon.com> wrote:
> > segfault on replica1
>
> > Mar 3 22:40:08 HFMHVR3 kernel: [11430546.394720] qemu-system-x86[14267]: segfault at 128 ip 00007f4812d945cc sp 00007f4816da48a0 error 4 in qemu-system-x86_64[7f4812a08000+4b1000]
>
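The faulting offset inside the qemu binary follows from the kernel line above: ip minus the mapping base, 0x7f4812d945cc - 0x7f4812a08000 = 0x38c5cc. Assuming the matching binary (and its debug symbols) is still installed at the usual path, it can be resolved with something like:

    # -f prints the function name, -C demangles; offset is ip minus mapping base
    addr2line -e /usr/bin/qemu-system-x86_64 -f -C 0x38c5cc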
> > The qemu logs only show the client shutting down on replica1
>
> > 2015-03-03 23:10:14.928+0000: shutting down
>
> > The heal logs on replica1
>
> > [2015-03-03 23:03:01.706880] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-0
>
> > [2015-03-03 23:13:01.776026] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-0
>
> > The heal logs on replica2
>
> > [2015-03-03 23:02:34.480041] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-1
>
> > [2015-03-03 23:12:34.539420] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-1
>
> > [2015-03-03 23:18:51.042321] I [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-VMARRAY-replicate-0: foreground data self heal is successfully completed, data self heal from VMARRAY-client-0 to sinks VMARRAY-client-1, with 53687091200 bytes on VMARRAY-client-0, 53687091200 bytes on VMARRAY-client-1, data - Pending matrix: [ [ 3 3 ] [ 1 1 ] ] on <gfid:86d8d9b4-f0cd-4607-abff-4b01f81d964b>
>
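The pending matrix in that heal entry can be cross-checked by dumping the AFR changelog xattrs of the image file directly on each brick; a sketch, with a hypothetical brick path:

    # trusted.afr.* xattrs hold the pending counts AFR reports in the heal log
    getfattr -d -m . -e hex /path/to/brick/HFMMAIL3.img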
> > The brick logs for both look like
>
> > [2015-03-03 23:10:13.831991] I [server.c:520:server_rpc_notify] 0-VMARRAY-server: disconnecting connectionfrom HFMHVR3-51477-2015/02/26-08:07:36:95892-VMARRAY-client-0-0-0
>
> > [2015-03-03 23:10:13.832161] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=4c2fb000487f0000}
> > [2015-03-03 23:10:13.832186] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=c883ac00487f0000}
> > [2015-03-03 23:10:13.832195] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=44d8a800487f0000}
> > [2015-03-03 23:10:13.832203] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=e8cea700487f0000}
> > [2015-03-03 23:10:13.832212] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=0477b000487f0000}
> > [2015-03-03 23:10:13.832219] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=2c2ba100487f0000}
> > [2015-03-03 23:10:13.832227] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=4cfab100487f0000}
> > [2015-03-03 23:10:13.832235] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=6c83a200487f0000}
> > [2015-03-03 23:10:13.832245] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=0454a000487f0000}
> > [2015-03-03 23:10:13.832255] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=a0e1a900487f0000}
> > [2015-03-03 23:10:13.832262] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=2031a700487f0000}
> > [2015-03-03 23:10:13.832270] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=7040ae00487f0000}
> > [2015-03-03 23:10:13.832279] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=1832ae00487f0000}
> > [2015-03-03 23:10:13.832287] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=68e0af00487f0000}
> > [2015-03-03 23:10:13.832294] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=6446b400487f0000}
> > [2015-03-03 23:10:13.832302] W [inodelk.c:392:pl_inodelk_log_cleanup] 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held by {client=0x7fe13076f550, pid=0 lk-owner=dcdda400487f0000}
>
> > [2015-03-03 23:10:13.832442] I [server-helpers.c:289:do_fd_cleanup] 0-VMARRAY-server: fd cleanup on /HFMMAIL3.img
>
> > [2015-03-03 23:10:13.832541] I [client_t.c:417:gf_client_unref] 0-VMARRAY-server: Shutting down connection HFMHVR3-51477-2015/02/26-08:07:36:95892-VMARRAY-client-0-0-0
>
> Hi Josh,
> qemu-gfapi.log will have the client-side log of gluster. Please post the log from that file too.
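If that file is large, the window around the events above is probably enough; a sketch assuming the default gluster log directory (adjust the path to wherever your setup writes qemu-gfapi.log):

    # grab the entries around the 22:40 segfault and the 23:10 disconnect
    grep -E '2015-03-03 (22:4|23:0|23:1)' /var/log/glusterfs/qemu-gfapi.log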
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users