Displaying 11 results from an estimated 11 matches for "folio_put".
2023 Jun 16
0
[PATCH net-next 12/17] ocfs2: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage()
..._resp = alloc_skb_frag(sizeof(*keep_resp),
+ GFP_KERNEL);
+ if (!keep_resp) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(keep_resp, 0, sizeof(*keep_resp));
+ keep_resp->magic = cpu_to_be16(O2NET_MSG_KEEP_RESP_MAGIC);
+ o2net_sendpage(sc, keep_resp, sizeof(*keep_resp));
+ folio_put(virt_to_folio(keep_resp));
goto out;
case O2NET_MSG_KEEP_RESP_MAGIC:
goto out;
@@ -1439,15 +1448,22 @@ static void o2net_rx_until_empty(struct work_struct *work)
sc_put(sc);
}
-static void o2net_initialize_handshake(void)
+static struct o2net_handshake *o2net_initialize_handshake(vo...
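For context, the buffer lifetime these diffs rely on looks roughly like the sketch below. It is a minimal sketch only: alloc_skb_frag() appears to be introduced elsewhere in the same series, so plain alloc_page() stands in for it here, and splice_msg_alloc()/splice_msg_done() are made-up names, not o2net code.

#include <linux/gfp.h>
#include <linux/mm.h>

static void *splice_msg_alloc(void)
{
	struct page *page = alloc_page(GFP_KERNEL);

	/* The returned buffer holds one reference on the backing folio. */
	return page ? page_address(page) : NULL;
}

static void splice_msg_done(void *msg)
{
	/*
	 * Drop the sender's reference once the message has been queued;
	 * any reference taken by the networking layer for MSG_SPLICE_PAGES
	 * keeps the page alive until the skb is freed.
	 */
	folio_put(virt_to_folio(msg));
}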
2023 Mar 31
0
[PATCH v3 52/55] ocfs2: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage()
...page_frag_alloc(NULL, sizeof(*keep_resp),
+ GFP_KERNEL);
+ if (!keep_resp) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(keep_resp, 0, sizeof(*keep_resp));
+ keep_resp->magic = cpu_to_be16(O2NET_MSG_KEEP_RESP_MAGIC);
+ o2net_sendpage(sc, keep_resp, sizeof(*keep_resp));
+ folio_put(virt_to_folio(keep_resp));
goto out;
case O2NET_MSG_KEEP_RESP_MAGIC:
goto out;
@@ -1439,15 +1448,22 @@ static void o2net_rx_until_empty(struct work_struct *work)
sc_put(sc);
}
-static void o2net_initialize_handshake(void)
+static struct o2net_handshake *o2net_initialize_handshake(vo...
2023 Jun 17
0
[PATCH net-next v2 12/17] ocfs2: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage()
...= alloc_skb_frag(sizeof(*keep_resp),
+ GFP_KERNEL);
+ if (!keep_resp) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(keep_resp, 0, sizeof(*keep_resp));
+ keep_resp->magic =
+ cpu_to_be16(O2NET_MSG_KEEP_RESP_MAGIC);
+ o2net_sendpage(sc, keep_resp, sizeof(*keep_resp));
+ folio_put(virt_to_folio(keep_resp));
goto out;
case O2NET_MSG_KEEP_RESP_MAGIC:
goto out;
@@ -1439,15 +1449,23 @@ static void o2net_rx_until_empty(struct work_struct *work)
sc_put(sc);
}
-static void o2net_initialize_handshake(void)
+static struct o2net_handshake *o2net_initialize_handshake(vo...
2023 Mar 29
0
[RFC PATCH v2 45/48] ocfs2: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage()
...page_frag_alloc(NULL, sizeof(*keep_resp),
+ GFP_KERNEL);
+ if (!keep_resp) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(keep_resp, 0, sizeof(*keep_resp));
+ keep_resp->magic = cpu_to_be16(O2NET_MSG_KEEP_RESP_MAGIC);
+ o2net_sendpage(sc, keep_resp, sizeof(*keep_resp));
+ folio_put(virt_to_folio(keep_resp));
goto out;
case O2NET_MSG_KEEP_RESP_MAGIC:
goto out;
@@ -1439,15 +1448,22 @@ static void o2net_rx_until_empty(struct work_struct *work)
sc_put(sc);
}
-static void o2net_initialize_handshake(void)
+static struct o2net_handshake *o2net_initialize_handshake(vo...
2023 Mar 30
4
[PATCH v2] mm: Take a page reference when removing device exclusive entries
...t. If the folio is free the entry must
+ * have been removed already. If it happens to have already
+ * been re-allocated after being freed all we do is lock and
+ * unlock it.
+ */
+ if (!folio_try_get(folio))
+ return 0;
+
+ if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+ folio_put(folio);
return VM_FAULT_RETRY;
+ }
mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
vma->vm_mm, vmf->address & PAGE_MASK,
(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_...
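The shape of the fix above, reduced to its core: take a speculative reference with folio_try_get() before touching the folio, and pair every successful get with a folio_put(). A minimal sketch with a made-up helper name; folio_lock_or_retry()'s signature has varied between kernel versions, so plain folio_lock() is used here.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Returns true with the folio locked and a reference held; the caller
 * must folio_unlock() and folio_put() when it is done.
 */
static bool get_and_lock_live_folio(struct folio *folio)
{
	/* The refcount may already be zero if the folio is being freed. */
	if (!folio_try_get(folio))
		return false;

	folio_lock(folio);
	return true;
}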
2023 Mar 29
1
[PATCH] mm: Take a page reference when removing device exclusive entries
...pages don't have individual refcounts; all the refcounts are actually
taken on the folio. So this should be:
if (!folio_try_get(folio))
return 0;
(you can fix up the comment yourself)
> + if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
> + put_page(vmf->page);
folio_put(folio);
> return VM_FAULT_RETRY;
> + }
> mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
> vma->vm_mm, vmf->address & PAGE_MASK,
> (vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3637,6 +3648,7 @@ static vm_fault_t remo...
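The review's point is that the refcount being manipulated always lives on the folio, so the get and the put should both operate on it directly. Roughly, and ignoring the extra cases the real helper in include/linux/mm.h handles, put_page() amounts to:

#include <linux/mm.h>

static inline void put_page_sketch(struct page *page)
{
	/* Dropping a page reference really drops the folio's reference. */
	folio_put(page_folio(page));
}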
2023 Mar 07
3
remove most callers of write_one_page v4
Hi all,
this series removes most users of the write_one_page API. These helpers
internally call ->writepage which we are gradually removing from the
kernel.
Changes since v3:
- drop all patches merged in v6.3-rc1
- re-add the jfs patch
Changes since v2:
- more minix error handling fixes
Changes since v1:
- drop the btrfs changes (queue up in the btrfs tree)
- drop the final move to
2023 Mar 28
3
[PATCH] mm: Take a page reference when removing device exclusive entries
Device exclusive page table entries are used to prevent CPU access to
a page whilst it is being accessed from a device. Typically this is
used to implement atomic operations when the underlying bus does not
support atomic access. When a CPU thread encounters a device exclusive
entry it locks the page and restores the original entry after calling
mmu notifiers to signal drivers that exclusive
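The ordering that description implies, heavily simplified: the real code is the remove_device_exclusive_entry() function visible in the hunks quoted above, which also does the PTE work under the page table lock and, after this patch, takes a folio reference first. The notifier call below follows the argument order shown in the v2 hunk above.

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/pagemap.h>

static void revoke_device_exclusive(struct vm_fault *vmf, struct folio *folio)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long start = vmf->address & PAGE_MASK;
	struct mmu_notifier_range range;

	folio_lock(folio);
	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
				      vma->vm_mm, start, start + PAGE_SIZE,
				      NULL);
	mmu_notifier_invalidate_range_start(&range);

	/* ... restore the original entry in place of the exclusive PTE ... */

	mmu_notifier_invalidate_range_end(&range);
	folio_unlock(folio);
}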
2024 Nov 12
1
[RFC PATCH v1 00/10] mm: Introduce and use folio_owner_ops
On 12.11.24 14:53, Jason Gunthorpe wrote:
> On Tue, Nov 12, 2024 at 10:10:06AM +0100, David Hildenbrand wrote:
>> On 12.11.24 06:26, Matthew Wilcox wrote:
>>> On Mon, Nov 11, 2024 at 08:26:54AM +0000, Fuad Tabba wrote:
>>>> Thanks for your comments Jason, and for clarifying my cover letter
>>>> David. I think David has covered everything, and I'll make
2023 Jun 18
11
[PATCH v1 0/5] clean up block_commit_write
*** BLURB HERE ***
Bean Huo (5):
fs/buffer: clean up block_commit_write
fs/buffer.c: convert block_commit_write to return void
ext4: No need to check return value of block_commit_write()
fs/ocfs2: No need to check return value of block_commit_write()
udf: No need to check return value of block_commit_write()
fs/buffer.c | 24 +++++++-----------------
2024 Nov 13
2
[RFC PATCH v1 00/10] mm: Introduce and use folio_owner_ops
...is and operates on the
memdesc's refcount ... if it has one. I don't know if it'll be exported
to modules; I can see uses in the mm code, but I'm not sure if modules
will have a need.
Each memdesc type will have its own function to call to free the memdesc.
So we'll still have folio_put(). But slab does not have, need nor want
a refcount, so it'll just slab_free(). I expect us to keep around a
list of recently-freed memdescs of a particular type with their pages
still attached so that we can allocate them again quickly (or reclaim
them under memory pressure). Once that free...
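A purely illustrative sketch of that split; every identifier below is hypothetical and nothing like it exists in the tree. The point is only that refcounted memdesc types keep a _put() path while slab frees its memdesc unconditionally.

#include <linux/atomic.h>

struct folio_memdesc {		/* hypothetical memdesc type with a refcount */
	atomic_t refcount;
};

struct slab_memdesc {		/* hypothetical memdesc type, no refcount */
	unsigned int objects;
};

static void folio_memdesc_free(struct folio_memdesc *f)
{
	/* type-specific free: return the memdesc and its pages */
}

static void folio_memdesc_put(struct folio_memdesc *f)
{
	/* the folio analogue keeps a refcount, so freeing goes via put */
	if (atomic_dec_and_test(&f->refcount))
		folio_memdesc_free(f);
}

static void slab_memdesc_free(struct slab_memdesc *s)
{
	/* slab has no refcount to drop; the allocator frees directly */
}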