Displaying 6 results from an estimated 6 matches for "lookup_fast".
2017 Oct 18
0
Mounting of Gluster volumes in Kubernetes
...pare_to_wait_event+0xf0/0xf0
[495498.194357] [<ffffffffc02e3679>] 0xffffffffc02e3679
[495498.199519] [<ffffffffc02e723a>] fuse_simple_request+0x11a/0x1e0 [fuse]
[495498.206415] [<ffffffffc02e7f71>] fuse_dev_cleanup+0xa81/0x1ef0 [fuse]
[495498.213151] [<ffffffffa11b91a9>] lookup_fast+0x249/0x330
[495498.218748] [<ffffffffa11b95bd>] walk_component+0x3d/0x500
While the particular issue seems more closely related to the FUSE client
talking to Gluster, we're wondering if others have seen this type of
behavior, and whether there are particular troubleshooting/tuning steps we
might be adv...
2018 Oct 26
0
systemd automount of cifs share hangs
...man kernel: [<ffffffff85ab2fc0>] do_expire_wait+0x1e0/0x210
Oct 26 09:11:45 saruman kernel: [<ffffffff85ab31fe>] autofs4_d_manage+0x7e/0x1d0
Oct 26 09:11:45 saruman kernel: [<ffffffff85a2a37a>] follow_managed+0xba/0x310
Oct 26 09:11:45 saruman kernel: [<ffffffff85a2b32d>] lookup_fast+0x12d/0x230
Oct 26 09:11:45 saruman kernel: [<ffffffff85a2e0dd>] path_lookupat+0x16d/0x8b0
Oct 26 09:11:45 saruman kernel: [<ffffffff85f127ba>] ? avc_alloc_node+0x24/0x123
Oct 26 09:11:45 saruman kernel: [<ffffffff859fadf5>] ? kmem_cache_alloc+0x35/0x1f0
Oct 26 09:11:45 saruman...
2016 Jul 02
2
truecrypt on synology as subfolder
On 02.07.2016 at 17:24, Xen wrote:
> Reindl Harald wrote on 02-07-2016 at 17:07:
>> that's hardly something which can be changed in the application layer,
>> and you have similar problems when writing large amounts of data to a
>> slow block device connected via USB
>
> So do you have any info on whether someone else could fix it? I mean,
> there must be kernel people that
2018 Oct 19
2
systemd automount of cifs share hangs
>
> But if I start the automount unit and ls the mount point, the shell hangs
> and, a long time later (I haven't timed it, maybe an hour), I eventually
> get a prompt again. Control-C won't interrupt it. I can still ssh in and
> get another session, so it's just the process accessing the mount point
> that hangs.
>
I don't have a
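For readers unfamiliar with the setup being described, a minimal systemd mount/automount unit pair for a CIFS share typically looks like the following sketch. The server, share, mount point, and credentials path are all placeholders, not values taken from this thread; note that systemd requires the unit filenames to match the mount path (`/mnt/share` → `mnt-share.mount`/`mnt-share.automount`).

```ini
# /etc/systemd/system/mnt-share.mount  (hypothetical names/paths)
[Unit]
Description=CIFS share

[Mount]
What=//fileserver.example.com/share
Where=/mnt/share
Type=cifs
Options=credentials=/etc/cifs-credentials,vers=3.0

# /etc/systemd/system/mnt-share.automount
[Unit]
Description=Automount for CIFS share

[Automount]
Where=/mnt/share
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
```

With `systemctl enable --now mnt-share.automount`, the first access to `/mnt/share` triggers the mount; the hang reported above occurs at exactly that first-access step, while the kernel waits in the autofs expire/mount path.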
2017 Nov 24
0
Changing performance.parallel-readdir to on causes CPU soft lockup and very high load all glusterd nodes
...96591.960270] [<ffffffff811c1988>] ? check_submounts_and_drop+0x68/0x90
> Nov 10 20:55:53 n01c01 kernel: [196591.960278] [<ffffffffa017f7f8>] ? fuse_dentry_revalidate+0x1e8/0x300 [fuse]
> Nov 10 20:55:53 n01c01 kernel: [196591.960281] [<ffffffff811b4e5e>] ? lookup_fast+0x25e/0x2b0
> Nov 10 20:55:53 n01c01 kernel: [196591.960283] [<ffffffff811b5ebb>] ? link_path_walk+0x1ab/0x870
> Nov 10 20:55:53 n01c01 kernel: [196591.960285] [<ffffffff811ba2ec>] ? path_openat+0x9c/0x680
> Nov 10 20:55:53 n01c01 kernel: [196591.960289] [<fffff...
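The option named in this thread's subject is a standard Gluster volume tunable, toggled via the Gluster CLI. A sketch of the commands involved (the volume name `myvol` is a placeholder, not taken from the thread):

```shell
# Enable the option that the thread reports as triggering soft lockups
gluster volume set myvol performance.parallel-readdir on

# Revert it if clients start hanging in the FUSE lookup path
gluster volume set myvol performance.parallel-readdir off

# Inspect the current value
gluster volume get myvol performance.parallel-readdir
```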
2017 Nov 14
3
Changing performance.parallel-readdir to on causes CPU soft lockup and very high load all glusterd nodes
...0:55:53 n01c01 kernel: [196591.960270] [<ffffffff811c1988>] ? check_submounts_and_drop+0x68/0x90
Nov 10 20:55:53 n01c01 kernel: [196591.960278] [<ffffffffa017f7f8>] ? fuse_dentry_revalidate+0x1e8/0x300 [fuse]
Nov 10 20:55:53 n01c01 kernel: [196591.960281] [<ffffffff811b4e5e>] ? lookup_fast+0x25e/0x2b0
Nov 10 20:55:53 n01c01 kernel: [196591.960283] [<ffffffff811b5ebb>] ? link_path_walk+0x1ab/0x870
Nov 10 20:55:53 n01c01 kernel: [196591.960285] [<ffffffff811ba2ec>] ? path_openat+0x9c/0x680
Nov 10 20:55:53 n01c01 kernel: [196591.960289] [<ffffffff8116c0fc>] ? handle...