Displaying 20 results from an estimated 51 matches for "vfs_ioctl".
2019 Jun 13
2
memory leak in vhost_net_ioctl
...[inline]
[<0000000079ebab38>] vhost_net_ubuf_alloc drivers/vhost/net.c:241 [inline]
[<0000000079ebab38>] vhost_net_set_backend drivers/vhost/net.c:1534 [inline]
[<0000000079ebab38>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1716
[<000000009f6204a2>] vfs_ioctl fs/ioctl.c:46 [inline]
[<000000009f6204a2>] file_ioctl fs/ioctl.c:509 [inline]
[<000000009f6204a2>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
[<00000000b45866de>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
[<00000000dfb41eb8>] __do_sys_ioctl fs/ioctl.c:720 [inline...
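Every trace in these reports funnels through the same dispatch chain: ioctl(2)
enters the kernel via ksys_ioctl, do_vfs_ioctl handles the generic commands,
and vfs_ioctl forwards everything else to the driver's unlocked_ioctl handler
(here, vhost_net_ioctl). A simplified sketch of that last common frame,
loosely following fs/ioctl.c of this era (the exact body varies by version):

    /* Sketch of vfs_ioctl() (fs/ioctl.c). Everything below this
     * frame in the traces above is driver code. */
    static long vfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
            int error = -ENOTTY;

            if (!filp->f_op->unlocked_ioctl)
                    goto out;
            error = filp->f_op->unlocked_ioctl(filp, cmd, arg); /* -> vhost_net_ioctl() */
            if (error == -ENOIOCTLCMD)
                    error = -ENOTTY;
    out:
            return error;
    }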
2019 Jun 14
2
memory leak in vhost_net_ioctl
...<00000000b3825d52>] vhost_net_ubuf_alloc drivers/vhost/net.c:241 [inline]
> [<00000000b3825d52>] vhost_net_set_backend drivers/vhost/net.c:1535 [inline]
> [<00000000b3825d52>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1717
> [<00000000700f02d7>] vfs_ioctl fs/ioctl.c:46 [inline]
> [<00000000700f02d7>] file_ioctl fs/ioctl.c:509 [inline]
> [<00000000700f02d7>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
> [<000000009a0ec0a7>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
> [<00000000d9416323>] __do_sys_ioctl fs...
2019 Jun 06
1
memory leak in vhost_net_ioctl
...<0000000079ebab38>] vhost_net_ubuf_alloc drivers/vhost/net.c:241 [inline]
> [<0000000079ebab38>] vhost_net_set_backend drivers/vhost/net.c:1534 [inline]
> [<0000000079ebab38>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1716
> [<000000009f6204a2>] vfs_ioctl fs/ioctl.c:46 [inline]
> [<000000009f6204a2>] file_ioctl fs/ioctl.c:509 [inline]
> [<000000009f6204a2>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
> [<00000000b45866de>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
> [<00000000dfb41eb8>] __do_sys_ioctl fs...
2019 Jun 05
0
memory leak in vhost_net_ioctl
...e]
[<0000000079ebab38>] vhost_net_ubuf_alloc drivers/vhost/net.c:241 [inline]
[<0000000079ebab38>] vhost_net_set_backend drivers/vhost/net.c:1534 [inline]
[<0000000079ebab38>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1716
[<000000009f6204a2>] vfs_ioctl fs/ioctl.c:46 [inline]
[<000000009f6204a2>] file_ioctl fs/ioctl.c:509 [inline]
[<000000009f6204a2>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
[<00000000b45866de>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
[<00000000dfb41eb8>] __do_sys_ioctl fs/ioctl.c:720 [inline...
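Per the frames above, the object kmemleak flags is the zero-copy ubuf
bookkeeping that vhost_net_ubuf_alloc() allocates when vhost_net_set_backend()
attaches a backend. The whole path can be driven from userspace with a single
ioctl(2) on /dev/vhost-net; a minimal sketch, assuming a kernel built with
vhost-net (VHOST_SET_OWNER and VHOST_NET_SET_BACKEND are the real ioctls from
<linux/vhost.h>; the fd value here is illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    int main(void)
    {
            int vhost = open("/dev/vhost-net", O_RDWR);
            if (vhost < 0) { perror("open /dev/vhost-net"); return 1; }

            /* The device needs an owner before most vhost ioctls work. */
            if (ioctl(vhost, VHOST_SET_OWNER, NULL))
                    perror("VHOST_SET_OWNER");

            /* ksys_ioctl -> do_vfs_ioctl -> vfs_ioctl -> vhost_net_ioctl ->
             * vhost_net_set_backend: the exact chain in the traces above.
             * fd = -1 detaches the backend; a tap fd would attach one. */
            struct vhost_vring_file backend = { .index = 0, .fd = -1 };
            if (ioctl(vhost, VHOST_NET_SET_BACKEND, &backend))
                    perror("VHOST_NET_SET_BACKEND");
            return 0;
    }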
2014 Apr 02
2
possible kernel bug?
...>] kvm_arch_vcpu_ioctl_run+0x627/0x10b0 [kvm]
<4> [<ffffffffa028eb04>] kvm_vcpu_ioctl+0x434/0x580 [kvm]
<4> [<ffffffff81060b13>] ? perf_event_task_sched_out+0x33/0x70
<4> [<ffffffff8100bb8e>] ? apic_timer_interrupt+0xe/0x20
<4> [<ffffffff8119dc12>] vfs_ioctl+0x22/0xa0
<4> [<ffffffff8119e0da>] do_vfs_ioctl+0x3aa/0x580
<4> [<ffffffff8100bb8e>] ? apic_timer_interrupt+0xe/0x20
<4> [<ffffffffa029b6ab>] ? kvm_on_user_return+0x7b/0x90 [kvm]
<4> [<ffffffff8119e331>] sys_ioctl+0x81/0xa0
<4> [<ffffffff8100...
2007 Apr 25
1
Problem with SuSe 10.0 and zaptel 1.2.17
...8>] generic_file_aio_write+0x58/0xc0
[<f88f80db>] ext3_file_write+0x1b/0x93 [ext3]
[<c0159466>] do_sync_write+0xb6/0x110
[<f8a7ac33>] zt_ioctl+0x93/0x100 [zaptel]
[<f8a7aba0>] zt_ioctl+0x0/0x100 [zaptel]
[<c0169b5e>] do_ioctl+0x4e/0x60
[<c0169c6f>] vfs_ioctl+0x4f/0x1c0
[<c0169e17>] sys_ioctl+0x37/0x70
[<c0102d3b>] sysenter_past_esp+0x54/0x79
Code: ff 89 f8 89 f1 e8 75 88 77 c7 31 ff c7 85 94 06 00 00 00 00 00 00 e9 77 f4 ff ff 8b 4c 24 24 e9 e4 f8 ff ff 8b 04 95 20 0d aa f8 <8b> 80 9c 00 00 00 e8 5d a9 6c c7 8b 44 24 20 8b 04 85...
2010 Jul 10
1
deadlock possiblity introduced by "drm/nouveau: use drm_mm in preference to custom code doing the same thing"
...[nouveau]
[ 2417.747032] [<ffffffffa00ac534>] nouveau_channel_free+0x141/0x233 [nouveau]
[ 2417.747037] [<ffffffffa00ac695>] nouveau_ioctl_fifo_free+0x6f/0x73 [nouveau]
[ 2417.747043] [<ffffffff81295cc2>] drm_ioctl+0x27b/0x347
[ 2417.747045] [<ffffffff8110c51d>] vfs_ioctl+0x2d/0xa1
[ 2417.747049] [<ffffffff8110ca69>] do_vfs_ioctl+0x454/0x48d
[ 2417.747052] [<ffffffff8110cae4>] sys_ioctl+0x42/0x65
[ 2417.747055] [<ffffffff8102fdab>] system_call_fastpath+0x16/0x1b
[ 2417.747058]
[ 2417.747059]
[ 2417.747060] the dependencies between the lo...
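Lockdep reports like this one describe a lock-order inversion: one code path
acquires lock A then lock B, another acquires B then A, and the two deadlock
when interleaved. A generic userspace illustration of the ABBA pattern being
warned about (plain pthread mutexes standing in for the nouveau locks; build
with -pthread):

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *path_one(void *unused)   /* takes A, then B */
    {
            pthread_mutex_lock(&lock_a);
            pthread_mutex_lock(&lock_b);  /* blocks if path_two holds B */
            pthread_mutex_unlock(&lock_b);
            pthread_mutex_unlock(&lock_a);
            return NULL;
    }

    static void *path_two(void *unused)   /* takes B, then A: inverted order */
    {
            pthread_mutex_lock(&lock_b);
            pthread_mutex_lock(&lock_a);  /* deadlocks once path_one holds A */
            pthread_mutex_unlock(&lock_a);
            pthread_mutex_unlock(&lock_b);
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, path_one, NULL);
            pthread_create(&t2, NULL, path_two, NULL);
            pthread_join(t1, NULL);  /* may never return: that is the bug */
            pthread_join(t2, NULL);
            return 0;
    }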
2019 Jun 13
0
memory leak in vhost_net_ioctl
...lloc
>> drivers/vhost/net.c:241 [inline]
>> [<0000000079ebab38>] vhost_net_set_backend drivers/vhost/net.c:1534 [inline]
>> [<0000000079ebab38>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1716
>> [<000000009f6204a2>] vfs_ioctl fs/ioctl.c:46 [inline]
>> [<000000009f6204a2>] file_ioctl fs/ioctl.c:509 [inline]
>> [<000000009f6204a2>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
>> [<00000000b45866de>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
>> [<00000000dfb41eb8>] _...
2019 Jun 13
0
memory leak in vhost_net_ioctl
...<0000000079ebab38>] vhost_net_ubuf_alloc drivers/vhost/net.c:241 [inline]
> [<0000000079ebab38>] vhost_net_set_backend drivers/vhost/net.c:1534 [inline]
> [<0000000079ebab38>] vhost_net_ioctl+0xb43/0xc10 drivers/vhost/net.c:1716
> [<000000009f6204a2>] vfs_ioctl fs/ioctl.c:46 [inline]
> [<000000009f6204a2>] file_ioctl fs/ioctl.c:509 [inline]
> [<000000009f6204a2>] do_vfs_ioctl+0x62a/0x810 fs/ioctl.c:696
> [<00000000b45866de>] ksys_ioctl+0x86/0xb0 fs/ioctl.c:713
> [<00000000dfb41eb8>] __do_sys_ioctl fs...
2013 Jul 13
1
btrfs filesystem balance /mnt/btrfs -> segmentation fault (kernel BUG at fs/btrfs/relocation.c:3296!)
...ioctl_balance+0x22e/0x2ac [btrfs]
[18483.578004] [<ffffffffa0721bac>] btrfs_ioctl+0xfab/0x1967 [btrfs]
[18483.578055] [<ffffffff81157d72>] ? avc_has_perm_flags+0x32/0xf7
[18483.578104] [<ffffffff81026887>] ? __do_page_fault+0x34f/0x3f3
[18483.578153] [<ffffffff810f1f51>] vfs_ioctl+0x21/0x34
[18483.578200] [<ffffffff810f27a3>] do_vfs_ioctl+0x3b1/0x3f4
[18483.578248] [<ffffffff810f2838>] SyS_ioctl+0x52/0x82
[18483.578298] [<ffffffff81375812>] system_call_fastpath+0x16/0x1b
[18483.578345] Code: 4c 89 f7 e8 77 b1 ff ff 48 89 c2 31 c0 48 85 d2 75 1b e9 42 ff...
2007 Feb 09
1
USB Problem
...39 mythbox kernel: [<ffffffff80221c91>] default_wake_function+0x0/0xe
Feb 8 19:09:39 mythbox kernel: [<ffffffff803c6c0e>] __down_failed+0x35/0x3a
Feb 8 19:09:39 mythbox kernel: [<ffffffff80270cc9>] do_ioctl+0x55/0x6b
Feb 8 19:09:39 mythbox kernel: [<ffffffff80270f31>] vfs_ioctl+0x252/0x26b
Feb 8 19:09:39 mythbox kernel: [<ffffffff80270f86>] sys_ioctl+0x3c/0x5e
Feb 8 19:09:39 mythbox kernel: [<ffffffff802096ee>] system_call+0x7e/0x83
Feb 8 19:09:39 mythbox kernel:
Feb 8 19:09:39 mythbox kernel:
Feb 8 19:09:39 mythbox kernel: Code: 80 7e 40 02 75 17 48 89...
2010 Aug 17
0
Re: [GIT PULL] devel/pat + devel/kms.fixes-0.5 on RV730 PRO [Radeon HD 4650]
...ctl+0x55/0xda [radeon]
> [<ffffffff811d6339>] ? avc_has_perm+0x57/0x69
> [<ffffffffa005d0c7>] drm_ioctl+0x232/0x2ef [drm]
> [<ffffffff8100fab9>] ? __spin_time_accum+0x21/0x37
> [<ffffffff8100fd19>] ? __xen_spin_lock+0xb7/0xcd
> [<ffffffff8111859c>] vfs_ioctl+0x6a/0x82
> [<ffffffff81118aa8>] do_vfs_ioctl+0x47d/0x4c3
> [<ffffffff81118b3f>] sys_ioctl+0x51/0x74
> [<ffffffff8110a4d7>] ? sys_read+0x5c/0x69
> [<ffffffff81012b42>] system_call_fastpath+0x16/0x1b
> ---[ end trace 5cb030d3ba77eb47 ]---
> ------------[...
2013 Apr 30
1
Panic while running defrag
I ran into a panic while running find -xdev | xargs btrfs fi defrag
'{}'. I don't remember the exact command because the history was not
saved. I also started and stopped it a few times, however.
The kernel logs were on a different filesystem. Here is the
kern.log: http://fpaste.org/9383/36729191/
My setup is two 2TB hard drives in RAID 1. They are both SATA drives so
2008 Apr 14
8
zaptel 1.4.10 regression with TE220B on Proliant DL380 G5 ?
...[<c04552ee>] do_generic_mapping_read+0x421/0x468
[<c045478b>] file_read_actor+0x0/0xd1
[<c04548e2>] find_get_page+0x18/0x38
[<c0457319>] filemap_nopage+0x192/0x315
[<c046048f>] __handle_mm_fault+0x85e/0x87b
[<c047f46b>] do_ioctl+0x47/0x5d
[<c047f6cb>] vfs_ioctl+0x24a/0x25c
[<c047f725>] sys_ioctl+0x48/0x5f
[<c0404eff>] syscall_call+0x7/0xb
=======================
VPM450: hardware DTMF disabled.
VPM450: Present and operational servicing 2 span(s)
Completed startup!
About to enter startup!
TE2XXP: Span 2 configured for CCS/HDB3/CRC4
wct2xxp: S...
2018 Aug 16
0
Old kernel bug back in CentOS 6.10?
...0 [kvm]
Aug 16 03:10:14 hyper-7 kernel: [265397.638183] [<ffffffffa04c72a1>] ? kvm_vm_ioctl+0x601/0x1050 [kvm]
Aug 16 03:10:14 hyper-7 kernel: [265397.674367] [<ffffffff8113f461>] ? free_one_page+0x191/0x440
Aug 16 03:10:14 hyper-7 kernel: [265397.708101] [<ffffffff811b4159>] ? vfs_ioctl+0x29/0xc0
Aug 16 03:10:14 hyper-7 kernel: [265397.739124] [<ffffffff81142d86>] ? __free_pages+0x46/0xa0
Aug 16 03:10:14 hyper-7 kernel: [265397.773193] [<ffffffff811b463a>] ? do_vfs_ioctl+0x3aa/0x590
Aug 16 03:10:14 hyper-7 kernel: [265397.805774] [<ffffffff81142e29>] ? free_pa...
2007 Apr 18
0
[Bridge] bug(?) in br_device_event causes kernel panic for 2.6.14
...bridge]
...
Process ifconfig (pid 22092, threadinfo:c18e9999 task:c165c600)
Stack: ....(I assume this is ifconfig's stack, and probably not a lot
of interest to the problem...)
Call Trace:
notifier_call_chain
dev_close
dev_change_flags
devinet_ioctl
inet_ioctl
sock_ioctl
do_ioctl
do_page_fault
vfs_ioctl
sys_ioctl
sysenter_past_esp
....
(0)Kernel panic- not syncing: Fatal exception in interrupt
Note: may have some typos, this is actually a PVR box plugged into a
TV - not the easiest to work with ;-)
Hope that's enough info to be useful, feel free to ask for more - so
long as you're willing...
2010 Feb 26
2
[Bug 26767] New: kmemleak complain about possible memory leak
...47
[<c10b5faa>] kmem_cache_alloc+0xa8/0xf6
[<c11a11b5>] idr_pre_get+0x27/0x60
[<f8144ca2>] drm_gem_handle_create+0x25/0x6f [drm]
[<f81f6780>] nouveau_gem_ioctl_new+0x201/0x262 [nouveau]
[<f81436a1>] drm_ioctl+0x268/0x310 [drm]
[<c10c6238>] vfs_ioctl+0x27/0x8c
[<c10c67b4>] do_vfs_ioctl+0x46e/0x4a8
[<c10c6833>] sys_ioctl+0x45/0x5f
[<c100290c>] sysenter_do_call+0x12/0x22
[<ffffffff>] 0xffffffff
As an extra detail, I'm using current linux-2.6 head
(baac35c4155a8aa826c70acee6553368ca5243a2) plus x11 over...
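The leaked allocation sits under idr_pre_get(), the preallocation half of the
old two-step idr API used in kernels of this vintage (later replaced by
idr_alloc()). A rough sketch of that pattern, with a hypothetical helper name
rather than the actual drm_gem code:

    /* Old-style idr handle allocation (pre-3.9): idr_pre_get() preallocates
     * internal layers (the kmem_cache_alloc in the kmemleak report above),
     * then idr_get_new() assigns an id, retrying if the layers were used up. */
    static int gem_handle_create_sketch(struct idr *idr, void *obj, int *handle)
    {
            int ret;
    again:
            if (idr_pre_get(idr, GFP_KERNEL) == 0)
                    return -ENOMEM;
            ret = idr_get_new(idr, obj, handle);
            if (ret == -EAGAIN)
                    goto again;      /* raced; preallocate and retry */
            return ret;
    }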