search for: radix_tree_node_alloc

Displaying 9 results from an estimated 11 matches for "radix_tree_node_alloc".

2016 Jan 15
0
freshclam: page allocation failure: order:0, mode:0x2204010
...9402>] ? nvkm_client_ioctl+0x12/0x20 [nouveau] [<ffffffff8124287b>] alloc_pages_current+0x9b/0x1c0 [<ffffffff8124c518>] new_slab+0x2a8/0x530 [<ffffffff8124dd30>] ___slab_alloc+0x1f0/0x580 [<ffffffff810e80a7>] ? sched_clock_local+0x17/0x80 [<ffffffff8142d958>] ? radix_tree_node_alloc+0x28/0xa0 [<ffffffffa04e548d>] ? intr_complete+0x3d/0xd0 [usbnet] [<ffffffff8142d958>] ? radix_tree_node_alloc+0x28/0xa0 [<ffffffff8124e111>] __slab_alloc+0x51/0x90 [<ffffffff8124e3a0>] kmem_cache_alloc+0x250/0x300 [<ffffffff8142d958>] ? radix_tree_node_alloc+0x28...
2017 Dec 18
0
[PATCH v19 3/7] xbitmap: add more operations
...failure to insert an item into a radix tree is not a problem, I >>> think that we don't need to use preloading. >> It also mentions that the preload attempts to allocate sufficient memory to *guarantee* that the next radix tree insertion cannot fail. >> >> If we check radix_tree_node_alloc(), the comment there says "this assumes that the caller has performed appropriate preallocation". > If you read what radix_tree_node_alloc() is doing, you will find that > radix_tree_node_alloc() returns NULL when memory allocation fails. > > I think that "this assumes...
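The preload pattern debated in this thread is the standard in-tree radix tree usage: a sleeping preallocation outside the lock, followed by an insertion that is then guaranteed not to fail. A minimal sketch, assuming a hypothetical tree, lock and insert_item() helper (only the radix tree calls are the real API):

	#include <linux/radix-tree.h>
	#include <linux/spinlock.h>

	/* Tree uses a non-blocking mask so inserts draw from the preload pool. */
	static RADIX_TREE(my_tree, GFP_ATOMIC);
	static DEFINE_SPINLOCK(my_lock);

	static int insert_item(unsigned long index, void *item)
	{
		int err;

		/* May sleep: preallocate enough nodes for one insertion. */
		err = radix_tree_preload(GFP_KERNEL);
		if (err)
			return err;	/* preemption was not left disabled */

		spin_lock(&my_lock);
		/* With preloading done, -ENOMEM is not expected here. */
		err = radix_tree_insert(&my_tree, index, item);
		spin_unlock(&my_lock);

		/* Re-enable the preemption disabled by radix_tree_preload(). */
		radix_tree_preload_end();
		return err;
	}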
2017 Dec 15
4
[PATCH v19 3/7] xbitmap: add more operations
On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote: > +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp); I'm struggling to understand when one would use this. The xb_ API requires you to handle your own locking. But specifying GFP flags here implies you can sleep. So ... um ... there's no locking? > +void xb_clear_bit_range(struct xb *xb, unsigned
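For reference, the usage model implied by the question above would look roughly like the following. The xb_preload()/xb_set_bit()/xb_preload_end() names come from the posted series; their exact signatures, and the caller-side lock, are assumptions for illustration only:

	/* Hypothetical caller: preload (may sleep per the gfp flags) outside
	 * the lock, then take the caller's own spinlock for the actual bit set.
	 * This is the split the review comment is questioning. */
	static int track_page(struct xb *xb, spinlock_t *lock, unsigned long pfn)
	{
		int err;

		xb_preload(GFP_KERNEL);		/* must run unlocked */

		spin_lock(lock);		/* caller-provided locking */
		err = xb_set_bit(xb, pfn);
		spin_unlock(lock);

		xb_preload_end();
		return err;
	}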
2014 Nov 17
0
kworker/u16:57: page allocation failure: order:0, mode:0x284000
...lloc_failed+0xd4/0x110 [<c0571ade>] __alloc_pages_nodemask+0x81e/0xc30 [<c04e4b65>] ? clockevents_program_event+0x45/0x150 [<c05b81c8>] new_slab+0x258/0x3a0 [<c05b9490>] __slab_alloc.constprop.55+0x5f0/0x790 [<c0b34636>] ? restore_all+0xf/0xf [<c0755622>] ? radix_tree_node_alloc+0x22/0x90 [<c04a9728>] ? __lock_is_held+0x48/0x70 [<c05bac55>] kmem_cache_alloc+0x295/0x3c0 [<c0755622>] ? radix_tree_node_alloc+0x22/0x90 [<c0755622>] radix_tree_node_alloc+0x22/0x90 [<c0755e7c>] __radix_tree_create+0x6c/0x1c0 [<c0756009>] radix_tree_inser...
2017 Dec 17
0
[PATCH v19 3/7] xbitmap: add more operations
...> problem. > That is, when failure to insert an item into a radix tree is not a problem, I > think that we don't need to use preloading. It also mentions that the preload attempts to allocate sufficient memory to *guarantee* that the next radix tree insertion cannot fail. If we check radix_tree_node_alloc(), the comments there says "this assumes that the caller has performed appropriate preallocation". So, I think we would get a risk of triggering some issue without preload(). > > > > So, I think we can handle the memory failure with xb_preload, which > > stops going in...
2017 Nov 30
0
[PATCH v18 07/10] virtio-balloon: VIRTIO_BALLOON_F_SG
...ly xb_preload(). > Memory allocation by xb_set_bit() will, after all, emit warnings. Maybe > > xb_init(&vb->page_xb); > vb->page_xb.gfp_mask |= __GFP_NOWARN; > > is tolerable? Or, unconditionally apply __GFP_NOWARN at xb_init()? > Please have a look at this one: radix_tree_node_alloc() In our case, I think the code path goes to if (!gfpflags_allow_blocking(gfp_mask) && !in_interrupt()) { ... ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask | __GFP_NOWARN); ... goto out; } So I think the __GFP_NOWARN is already there....
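Pulled out of the quoted snippet, the suggestion is simply to OR __GFP_NOWARN into the xbitmap's gfp mask right after initialization. That struct virtio_balloon has a page_xb member and that struct xb exposes gfp_mask are assumptions based on the posted patches:

	/* Sketch of the __GFP_NOWARN suggestion quoted above; not the final code. */
	static void vb_init_page_xb(struct virtio_balloon *vb)
	{
		xb_init(&vb->page_xb);
		/* Silence page allocation failure warnings coming from the
		 * xbitmap's internal radix tree node allocations. */
		vb->page_xb.gfp_mask |= __GFP_NOWARN;
	}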
2004 Aug 27
1
page allocation failure
...0/0x30 Aug 27 19:27:39 sauron kernel: [<c014d663>] wake_up_buffer+0x13/0x40 Aug 27 19:27:39 sauron kernel: [<c891ebfc>] linvfs_get_block+0x3c/0x40 [xfs] Aug 27 19:27:39 sauron kernel: [<c016e0e5>] do_mpage_readpage+0x385/0x3a0 Aug 27 19:27:39 sauron kernel: [<c01a023f>] radix_tree_node_alloc+0x1f/0x60 Aug 27 19:27:39 sauron kernel: [<c01a04ed>] radix_tree_insert+0xed/0x110 Aug 27 19:27:39 sauron kernel: [<c012fbc8>] add_to_page_cache+0x68/0xb0 Aug 27 19:27:39 sauron kernel: [<c016e23b>] mpage_readpages+0x13b/0x170 Aug 27 19:27:39 sauron kernel: [<c891ebc0>]...
2006 Nov 08
1
XFS Issues
...3 houla0 <ffffffffa020b663>{:xfs:linvfs_get_block +20} <ffffffff80198b43>{do_mpage_readpage+213} Nov 7 12:50:33 houla0 <ffffffffa020b64f>{:xfs:linvfs_get_block +0} <ffffffff8012065d>{flush_gart+210} Nov 7 12:50:33 houla0 <ffffffff801e98e7>{radix_tree_node_alloc +19} <ffffffff801e9aa3>{radix_tree_insert+254} Nov 7 12:50:33 houla0 <ffffffffa020b64f>{:xfs:linvfs_get_block +0} <ffffffffa020b64f>{:xfs:linvfs_get_block+0} Nov 7 12:50:33 houla0 <ffffffff80198e8b>{mpage_readpages+163} <ffffffff801609f0>{read_pag...
2017 Nov 29
22
[PATCH v18 00/10] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following new features: 1) fast ballooning: transfer ballooned pages between the guest and host in chunks using sgs, instead of one array each time; and 2) free page block reporting: a new virtqueue to report guest free pages to the host. The second feature can be used to accelerate live migration of VMs. Here are some details: Live
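The "chunks using sgs" idea in the summary above can be pictured roughly as follows: a contiguous run of ballooned guest pages is described by a single scatterlist entry and queued in one go, instead of one PFN array slot per page. This is not the series' code; the helper name, buffer direction and error handling are all simplifications:

	#include <linux/scatterlist.h>
	#include <linux/virtio.h>
	#include <linux/mm.h>

	/* Hypothetical helper: report one physically contiguous chunk of
	 * ballooned pages to the host with a single descriptor. */
	static int send_page_chunk(struct virtqueue *vq, struct page *first,
				   unsigned int nr_pages)
	{
		struct scatterlist sg;
		int err;

		/* One sg entry covers the whole chunk. */
		sg_init_one(&sg, page_address(first), nr_pages * PAGE_SIZE);

		err = virtqueue_add_outbuf(vq, &sg, 1, first, GFP_KERNEL);
		if (!err)
			virtqueue_kick(vq);
		return err;
	}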