Displaying 20 results from an estimated 31 matches for "alloc_pages_current".
2016 Jan 15
0
freshclam: page allocation failure: order:0, mode:0x2204010
...64 #1
...
Call Trace:
<IRQ> [<ffffffff81427e39>] dump_stack+0x4b/0x72
[<ffffffff811ebbca>] warn_alloc_failed+0xfa/0x160
[<ffffffff811f00a1>] __alloc_pages_nodemask+0x4b1/0xd70
[<ffffffffa0389402>] ? nvkm_client_ioctl+0x12/0x20 [nouveau]
[<ffffffff8124287b>] alloc_pages_current+0x9b/0x1c0
[<ffffffff8124c518>] new_slab+0x2a8/0x530
[<ffffffff8124dd30>] ___slab_alloc+0x1f0/0x580
[<ffffffff810e80a7>] ? sched_clock_local+0x17/0x80
[<ffffffff8142d958>] ? radix_tree_node_alloc+0x28/0xa0
[<ffffffffa04e548d>] ? intr_complete+0x3d/0xd0 [usbnet]
[...
2013 Feb 03
3
kernel BUG at fs/btrfs/extent-tree.c:6185!
...dge kernel: [<ffffffffa025d9d7>] open_ctree+0x1587/0x1ba0
[btrfs]
Feb 02 13:59:58 Edge kernel: [<ffffffff81255091>] ? disk_name+0x61/0xc0
Feb 02 13:59:58 Edge kernel: [<ffffffffa0236ae3>] btrfs_mount+0x633/0x770
[btrfs]
Feb 02 13:59:58 Edge kernel: [<ffffffff81165f60>] ?
alloc_pages_current+0xb0/0x120
Feb 02 13:59:58 Edge kernel: [<ffffffff81188163>] mount_fs+0x43/0x1b0
Feb 02 13:59:58 Edge kernel: [<ffffffff811a2974>] vfs_kern_mount+0x74/0x110
Feb 02 13:59:58 Edge kernel: [<ffffffff811a2ed4>] do_kern_mount+0x54/0x110
Feb 02 13:59:58 Edge kernel: [<ffffffff811...
2010 Aug 04
1
A reproducible crash when mounting a subvolume
...ink_dcache_for_umount+0x40/0x51
[<ffffffff8112af26>] generic_shutdown_super+0x1f/0xe1
[<ffffffff8112b03d>] kill_anon_super+0x16/0x54
[<ffffffff8112b604>] deactivate_locked_super+0x26/0x46
[<ffffffffa02fdaeb>] btrfs_get_sb+0x360/0x3ec [btrfs]
[<ffffffff81111378>] ? alloc_pages_current+0xb2/0xc2
[<ffffffff8112b85a>] vfs_kern_mount+0xad/0x1a0
[<ffffffff8112b9b5>] do_kern_mount+0x4d/0xef
[<ffffffff81141cbc>] do_mount+0x732/0x78f
[<ffffffff81141f4e>] sys_mount+0x88/0xc2
[<ffffffff81009c72>] system_call_fastpath+0x16/0x1b
Code: 00 00 48 8b 40 28 4c...
2017 Oct 18
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
...balloon_size_func [virtio_balloon]
[ 19.534143] Call Trace:
[ 19.535015] dump_stack+0x63/0x87
[ 19.535844] warn_alloc+0x114/0x1c0
[ 19.536667] __alloc_pages_slowpath+0x9a6/0xba7
[ 19.537491] ? sched_clock_cpu+0x11/0xb0
[ 19.538311] __alloc_pages_nodemask+0x26a/0x290
[ 19.539188] alloc_pages_current+0x6a/0xb0
[ 19.540004] balloon_page_enqueue+0x25/0xf0
[ 19.540818] update_balloon_size_func+0xe1/0x260 [virtio_balloon]
[ 19.541626] process_one_work+0x149/0x360
[ 19.542417] worker_thread+0x4d/0x3c0
[ 19.543186] kthread+0x109/0x140
[ 19.543930] ? rescuer_thread+0x380/0x380
[ 19...
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please, in your spare time (if there is such a thing at a conference),
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the ''feature-max-indirect-segments'' implemented in both backend
and frontend. The current problem with the backend and
2019 Jun 13
0
[PATCH 05/22] mm: export alloc_pages_vma
...sertion(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 01600d80ae01..f9023b5fba37 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2098,6 +2098,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
out:
return page;
}
+EXPORT_SYMBOL_GPL(alloc_pages_vma);
/**
* alloc_pages_current - Allocate pages.
--
2.20.1
2019 Jun 14
1
[PATCH 05/22] mm: export alloc_pages_vma
...c
> index 01600d80ae01..f9023b5fba37 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2098,6 +2098,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
> out:
> return page;
> }
> +EXPORT_SYMBOL_GPL(alloc_pages_vma);
>
> /**
> * alloc_pages_current - Allocate pages.
>
2019 Jun 26
0
[PATCH 06/25] mm: export alloc_pages_vma
...1 insertion(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 01600d80ae01..f48569aa1863 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2098,6 +2098,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
out:
return page;
}
+EXPORT_SYMBOL(alloc_pages_vma);
/**
* alloc_pages_current - Allocate pages.
--
2.20.1
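For context on what these export patches enable: with alloc_pages_vma() exported, a modular driver can request pages that honor the NUMA memory policy of a specific VMA. The sketch below is illustrative only and is not part of the quoted series; the first three parameters come from the hunk header above, while the remaining ones (addr, node, hugepage), the GFP flags, and the helper name are assumptions based on kernels of that era with CONFIG_NUMA enabled.

/*
 * Hypothetical sketch of a module using the newly exported alloc_pages_vma().
 * Everything beyond the three parameters visible in the hunk header above is
 * assumed, not taken from the quoted patch.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/topology.h>

static struct page *demo_alloc_for_vma(struct vm_area_struct *vma,
				       unsigned long addr)
{
	/* Order-0 allocation that follows the memory policy of this VMA. */
	return alloc_pages_vma(GFP_HIGHUSER_MOVABLE, 0, vma, addr,
			       numa_node_id(), false);
}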
2020 Apr 23
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
...ot the guest with minimal RAM assigned?
> >
> > Regards,
> >
> > Joerg
> >
>
> I just want to add some context around this. The call path that led to the
> failure is as follows:
>
> __alloc_pages_slowpath
> __alloc_pages_nodemask
> alloc_pages_current
> alloc_pages
> split_large_page
> __change_page_attr
> __change_page_attr_set_clr
> __set_memory_enc_dec
> set_memory_decrypted
> sev_es_init_ghcbs
> trap_init -> before mm_init (in init/main.c)
> start_kernel
> x86_64_start_reservations
> x86_64_st...
2012 Jun 27
7
WARNING: at fs/btrfs/free-space-cache.c:1887 after hard shutdown.
...replay_one_dir_item+0xe0/0xe0 [btrfs]
[ 37.645763] [<ffffffffa0f0883d>] open_ctree+0x14ed/0x1ac0 [btrfs]
[ 37.645767] [<ffffffff8121e101>] ? disk_name+0x61/0xc0
[ 37.645773] [<ffffffffa0ee5836>] btrfs_mount+0x5b6/0x6a0 [btrfs]
[ 37.645776] [<ffffffff8113f400>] ? alloc_pages_current+0xb0/0x120
[ 37.645780] [<ffffffff81163533>] mount_fs+0x43/0x1b0
[ 37.645783] [<ffffffff8117d740>] vfs_kern_mount+0x70/0x100
[ 37.645786] [<ffffffff8117dc64>] do_kern_mount+0x54/0x110
[ 37.645788] [<ffffffff8117f55a>] do_mount+0x26a/0x850
[ 37.645791] [<fff...
2017 Oct 18
0
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
...> [ 19.534143] Call Trace:
> [ 19.535015] dump_stack+0x63/0x87
> [ 19.535844] warn_alloc+0x114/0x1c0
> [ 19.536667] __alloc_pages_slowpath+0x9a6/0xba7
> [ 19.537491] ? sched_clock_cpu+0x11/0xb0
> [ 19.538311] __alloc_pages_nodemask+0x26a/0x290
> [ 19.539188] alloc_pages_current+0x6a/0xb0
> [ 19.540004] balloon_page_enqueue+0x25/0xf0
> [ 19.540818] update_balloon_size_func+0xe1/0x260 [virtio_balloon]
> [ 19.541626] process_one_work+0x149/0x360
> [ 19.542417] worker_thread+0x4d/0x3c0
> [ 19.543186] kthread+0x109/0x140
> [ 19.543930] ? res...
2020 Apr 02
0
Stacktrace from 5.4.26 kernel.
...> [ 785.581255] <IRQ>
> [ 785.581271] dump_stack+0x6d/0x95
> [ 785.581275] warn_alloc+0xfe/0x160
> [ 785.581277] __alloc_pages_slowpath+0xe07/0xe40
> [ 785.581282] ? dev_gro_receive+0x626/0x690
> [ 785.581284] __alloc_pages_nodemask+0x2cd/0x320
> [ 785.581287] alloc_pages_current+0x6a/0xe0
> [ 785.581298] skb_page_frag_refill+0xd4/0x100
> [ 785.581302] try_fill_recv+0x3ed/0x740 [virtio_net]
> [ 785.581304] virtnet_poll+0x31f/0x349 [virtio_net]
> [ 785.581306] net_rx_action+0x140/0x3c0
> [ 785.581313] __do_softirq+0xe4/0x2da
> [ 785.581318] irq_...
2011 Jul 08
5
btrfs hang in flush-btrfs-5
...Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8146e867>]
__alloc_pages_direct_compact+0xa7/0x16d
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810deea3>]
__alloc_pages_nodemask+0x46a/0x77f
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff81108755>]
alloc_pages_current+0xbe/0xd8
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8111c7aa>] ?
__mem_cgroup_try_charge+0x111/0x480
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8110f902>]
alloc_slab_page+0x1c/0x4d
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff81110f4c>]...
2013 Sep 23
6
btrfs: qgroup scan failed with -12
...ea38
[1878432.675821] Call Trace:
[1878432.675874] [<ffffffff81378c52>] dump_stack+0x46/0x58
[1878432.675927] [<ffffffff810af9a4>] warn_alloc_failed+0x110/0x124
[1878432.675981] [<ffffffff810b1fd8>] __alloc_pages_nodemask+0x6a4/0x793
[1878432.676036] [<ffffffff810db7e8>] alloc_pages_current+0xc8/0xe5
[1878432.676098] [<ffffffff810af06c>] __get_free_pages+0x9/0x36
[1878432.676150] [<ffffffff810e27b9>] __kmalloc_track_caller+0x35/0x163
[1878432.676204] [<ffffffff810bde12>] krealloc+0x52/0x8c
[1878432.676265] [<ffffffffa036cdcb>] ulist_add_merge+0xe1/0x14e [bt...
2013 Feb 13
0
Re: Heavy memory leak when using quota groups
..._failed+0xf6/0x150
> [ 5123.800208] [<ffffffff8113e28e>] __alloc_pages_nodemask+0x76e/0x9b0
> [ 5123.800213] [<ffffffff81182945>] ? new_slab+0x125/0x1a0
> [ 5123.800216] [<ffffffff81185c2c>] ? kmem_cache_alloc+0x11c/0x140
> [ 5123.800221] [<ffffffff8117a66a>] alloc_pages_current+0xba/0x170
> [ 5123.800239] [<ffffffffa055f794>] btrfs_clone_extent_buffer+0x64/0xe0 [btrfs]
> [ 5123.800245] [<ffffffffa051fb33>] btrfs_search_old_slot+0xb3/0x940 [btrfs]
> [ 5123.800252] [<ffffffff810f78f7>] ? call_rcu_sched+0x17/0x20
> [ 5123.800263] [<ffffff...
2011 Feb 02
6
Backtrace in xen/next-2.6.38 when running guest
...ack]
[<ffffffff810066d5>] ? xen_force_evtchn_callback+0xd/0xf
[<ffffffff81006c72>] ? check_events+0x12/0x20
[<ffffffff81112e1c>] ? __kmalloc_node_track_caller+0xf8/0x118
[<ffffffffa022c80e>] ? net_tx_build_gops+0x3e2/0x94d [xen_netback]
[<ffffffff81108313>] ? alloc_pages_current+0xb6/0xd0
[<ffffffff813b5047>] ? __alloc_skb+0x8d/0x133
[<ffffffffa022bd1b>] ? netif_alloc_page.isra.14+0x1e/0x54 [xen_netback]
[<ffffffffa022c93f>] ? net_tx_build_gops+0x513/0x94d [xen_netback]
[<ffffffff8102abf9>] ? pvclock_clocksource_read+0x48/0xb7
[<fffffff...
2013 Feb 08
1
GlusterFS OOM Issue
Hello,
I am running GlusterFS version 3.2.7-2~bpo60+1 on Debian 6.0.6. Today, I
experienced a glusterfs process causing the server to invoke oom_killer.
How exactly would I go about investigating this and coming up with a fix?
--
Steve King
Network/Linux Engineer - AdSafe Media
Cisco Certified Network Professional
CompTIA Linux+ Certified Professional
CompTIA A+ Certified Professional
2013 Feb 26
0
Dom0 OOM, page allocation failure
...ce:
kernel: [<ffffffff81117a93>] warn_alloc_failed+0xf3/0x140
kernel: [<ffffffff81084ea3>] ? __wake_up+0x53/0x70
kernel: [<ffffffff8111aa90>] __alloc_pages_slowpath+0x4b0/0x7b0
kernel: [<ffffffff8111afaa>] __alloc_pages_nodemask+0x21a/0x230
kernel: [<ffffffff811571b6>] alloc_pages_current+0xb6/0x120
kernel: [<ffffffff8111778e>] __get_free_pages+0xe/0x50
kernel: [<ffffffff8130c4e1>] xen_swiotlb_alloc_coherent+0x51/0x180
kernel: [<ffffffff8115dc53>] ? kmem_cache_alloc_trace+0xb3/0x230
kernel: [<ffffffff81150735>] pool_alloc_page+0xc5/0x1d0
kernel: [<ffffffff...
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
...0xf0/0x140
[54807.449644] [<ffffffff81a661de>] ? __alloc_pages_direct_compact+0x1ac/0x1be
[54807.465164] [<ffffffff8113bfaa>] __alloc_pages_nodemask+0x7aa/0x9d0
[54807.480510] [<ffffffff810ed069>] ? trace_hardirqs_off_caller+0xb9/0x160
[54807.495622] [<ffffffff81175277>] alloc_pages_current+0xb7/0x180
[54807.510530] [<ffffffff81138059>] __get_free_pages+0x9/0x40
[54807.525185] [<ffffffff8117cbdc>] __kmalloc+0x19c/0x1c0
[54807.539538] [<ffffffff8190e9b4>] alloc_netdev_mqs+0x64/0x340
[54807.553814] [<ffffffff8192ac20>] ? alloc_etherdev_mqs+0x20/0x20
[54807.56...