search for: 4e90

Displaying 9 results from an estimated 9 matches for "4e90".

2017 Dec 21 · 2 · stale file handle on gluster NFS client when trying to remove a directory
...k] 0-g_sitework2-replicate-5: Blocking entrylks failed.
[2017-12-21 13:56:01.594350] W [MSGID: 108019] [afr-lk-common.c:1064:afr_log_entry_locks_failure] 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks on at least one child while attempting RMDIR on {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
[2017-12-21 13:56:01.594648] I [MSGID: 108019] [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk] 0-g_sitework2-replicate-4: Blocking entrylks failed.
[2017-12-21 13:56:01.594790] W [MSGID: 112032] [nfs3.c:3713:nfs3svc_rmdir_cbk] 0-nfs: df521f4d: <gfid:23558c...
2018 Jan 03 · 0 · stale file handle on gluster NFS client when trying to remove a directory
...te-5: Blocking entrylks failed.
[2017-12-21 13:56:01.594350] W [MSGID: 108019] [afr-lk-common.c:1064:afr_log_entry_locks_failure] 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks on at least one child while attempting RMDIR on {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
[2017-12-21 13:56:01.594648] I [MSGID: 108019] [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk] 0-g_sitework2-replicate-4: Blocking entrylks failed.
[2017-12-21 13:56:01.594790] W [MSGID: 112032] [nfs3.c:3713:nfs3svc_rmdir_cbk] ...
2018 Jan 03 · 1 · stale file handle on gluster NFS client when trying to remove a directory
...d.
[2017-12-21 13:56:01.594350] W [MSGID: 108019] [afr-lk-common.c:1064:afr_log_entry_locks_failure] 0-g_sitework2-replicate-4: Unable to obtain sufficient blocking entry locks on at least one child while attempting RMDIR on {pgfid:23558c59-87e5-4e90-a610-8a47ec08b27c, name:csrc}.
[2017-12-21 13:56:01.594648] I [MSGID: 108019] [afr-transaction.c:1903:afr_post_blocking_entrylk_cbk] 0-g_sitework2-replicate-4: Blocking entrylks failed.
[2017-12-21 13:56:01.594790] W [MSGID: 112032] [nf...
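
The three messages above all cite the same failure, and the parent directory is identified only by its pgfid. A minimal sketch of how one could map that pgfid to a real path on a brick, assuming a hypothetical brick root of /data/brick1 (for directories, the .glusterfs entry is a symlink to the directory's actual location):

    # The first two path components are the first two byte-pairs of the gfid.
    ls -l /data/brick1/.glusterfs/23/55/23558c59-87e5-4e90-a610-8a47ec08b27c
    # Resolve the symlink and check whether the child entry "csrc" is present
    # on this brick; comparing all bricks shows where the replicas disagree.
    ls -ld "$(readlink -f /data/brick1/.glusterfs/23/55/23558c59-87e5-4e90-a610-8a47ec08b27c)/csrc"

Running the same check on every brick of replicate-4 is the point: the warning says the entry lock could not be taken on at least one child.
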
2011 Aug 04 · 0 · Local delivery via deliver fails for 1 user in alias
...auth input: gid=20
deliver(greg.woods): Aug 03 15:20:32 Info: auth input: quota=maildir:User quota:noenforcing
deliver(greg.woods): Aug 03 15:20:32 Info: auth input: quota_rule=*:storage=0
deliver(greg.woods): Aug 03 15:20:32 Info: auth input: mail=maildir:/var/spool/imap/dovecot/mail/C730546B-FBEF-4E90-92CB-6F95AD8F0639
deliver(greg.woods): Aug 03 15:20:32 Info: auth input: mail_location=maildir:/var/spool/imap/dovecot/mail/C730546B-FBEF-4E90-92CB-6F95AD8F0639
deliver(greg.woods): Aug 03 15:20:32 Info: auth input: sieve=/var/spool/imap/dovecot/sieve-scripts/C730546B-FBEF-4E90-92CB-6F95AD8F0639/do...
2011 Sep 26 · 4 · Hard I/O lockup with EL6
...s. Dump of dmesg:
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-71.29.1.el6.x86_64 (mockbuild at c6b5.bsys.dev.centos.org) (gcc version 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Mon Jun 27 19:49:27 BST 2011
Command line: ro root=UUID=3653eebb-f6b7-4e90-8365-26f4eccaa960 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb quiet acpi=off
KERNEL supported cpus:
  Intel GenuineIntel
  AMD AuthenticAMD
  Centaur CentaurHauls
BIOS-provided physical RAM map:
BIOS-e820: 00...
2012 Jun 24 · 0 · nouveau _BIOS method
...e50: 90 93 52 50 38 44 00 5c 2f 04 5f 53 42 5f 50 43  ..RP8D.\/._SB_PC
4e60: 49 30 52 50 30 38 48 50 53 58 5b 22 0a 64 a0 4a  I0RP08HPSX[".d.J
4e70: 06 5c 2f 04 5f 53 42 5f 50 43 49 30 52 50 30 38  .\/._SB_PCI0RP08
4e80: 50 44 43 58 70 01 5c 2f 04 5f 53 42 5f 50 43 49  PDCXp.\/._SB_PCI
4e90: 30 52 50 30 38 50 44 43 58 70 01 5c 2f 04 5f 53  0RP08PDCXp.\/._S
4ea0: 42 5f 50 43 49 30 52 50 30 38 48 50 53 58 a0 2a  B_PCI0RP08HPSX.*
4eb0: 92 5c 2f 04 5f 53 42 5f 50 43 49 30 52 50 30 38  .\/._SB_PCI0RP08
4ec0: 50 44 53 58 70 00 5c 2f 04 5f 53 42 5f 50 43 49  PDSXp.\/._SB_PCI
4ed0: 30...
2017 Oct 26 · 0 · not healing one file
Hey Richard, could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards, Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
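
For item 2, a minimal sketch of what that check looks like on one brick. The brick and file paths here are hypothetical; the volume name home is taken from the 0-home-replicate-0 log prefix in the thread below:

    # Run as root on each brick server; -e hex prints the xattrs in hex.
    getfattr -d -e hex -m . /data/brick1/home/path/to/file
    # Illustrative output: a nonzero trusted.afr.<volume>-client-N counter on
    # one brick marks client N's copy of the file as pending heal.
    # trusted.afr.home-client-1=0x000000020000000000000000
    # trusted.gfid=0xe79be2ceed6b4e90854651490badbdc2
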
2017 Oct 26 · 3 · not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster Summit in Prague, will be checking this and respond next
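
The message does not name the tool; assuming it is the gluster-health-report project released around this time, an invocation might look like this (package name and command are assumptions):

    # Assumed install and run steps; execute separately on each of the
    # three machines, since the tool reports on the local node.
    pip install gluster-health-report
    gluster-health-report
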
2017 Oct 26 · 2 · not healing one file
...eal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on a5102a29-2f9e-4850-993c-5b9cc0a56e41. sources=0 [2] sinks=1
[2017-10-25 10:40:18.200615] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on e79be2ce-ed6b-4e90-8546-51490badbdc2. sources=0 [2] sinks=1
[2017-10-25 10:40:18.241840] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 0834ca27-5c08-43a7-89e8-fa27b279c67b. sources=0 [2] sinks=1
[2017-10-25 10:40:18.255870] I [MSGID: 108026] [afr-s...
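
The lines above only show heals that completed. A quick way to see what is still pending for this volume (the name home is inferred from the 0-home-replicate-0 prefix):

    # Lists entries each brick still needs healed.
    gluster volume heal home info
    # Lists entries stuck in split-brain, if any.
    gluster volume heal home info split-brain
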