Joel Fernandes
2025-Dec-04 21:51 UTC
[PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings
This series combines a number of other series which build up to the same goal:
to make it possible to use DRM buddy from nova-core Rust code. See links to the
different series below.
The git tree with all patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag:
clist-buddy-v4-20251204)
Changes for v4:
- Combined the clist and DRM buddy series:
  - A Rust module (clist) to access circular linked lists from Rust code.
  - DRM buddy allocator bindings that were originally part of the RFC.
- Moved the DRM buddy allocator one level up to drivers/gpu/ so it can be used
  by GPU drivers (for example, nova-core) that have usecases other than DRM.
- Added Rust bindings for the GPU buddy allocator.
Notes from past cover letters about CList:
Introduction
============
This patchset introduces an interface to iterate over doubly circular linked
lists used in the kernel (allocated by C kernel code). The main usecase is
iterating over the list of blocks provided by the GPU buddy allocator.
The series also moves the DRM buddy allocator one level up and adds Rust
bindings for it, enabling GPU drivers like nova-core to use it.
A question may arise: Why not use rust list.rs for this?
========================================================
Rust's list.rs is used to provide safe intrusive lists for Rust-allocated
items. In doing so, it takes ownership of the items in the list and the links
between list items. However, in the GPU buddy allocator bindings usecase, the
C side allocates the items in the list and also links the list together. Due
to this, there is an ownership conflict making list.rs not the best abstraction
for this usecase. What we need is a view of the list, not ownership of it.
Further, the list links in a bindings usecase may come from C allocated
objects, not from the Rust side.
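
To make the "view, not ownership" point concrete, below is a condensed sketch
of the intended usage, adapted from the doctest in patch 1 (`SampleItemC` and
its `value`/`link` fields are illustrative names; the list is allocated and
linked entirely by C code, prelude imports elided):

  use kernel::{clist_create, types::Opaque};

  // Rust view over a C struct that embeds a `struct list_head link;`.
  #[repr(transparent)]
  struct Item(Opaque<SampleItemC>);

  impl Item {
      fn value(&self) -> i32 {
          // SAFETY: `Item` has the same layout as `SampleItemC`.
          unsafe { (*self.0.get()).value }
      }
  }

  // `head` is the sentinel `*mut bindings::list_head` handed over by C code.
  // SAFETY: the list stays valid and unmodified while this view exists.
  let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
  for item in list.iter() {
      pr_info!("value: {}\n", item.value());
  }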
Link to v2 (clist only):
https://lore.kernel.org/all/20251111171315.2196103-4-joelagnelf at nvidia.com/
Notes and patches about DRM buddy code movement and DRM buddy bindings:
Link to RFC: https://lore.kernel.org/all/20251030190613.1224287-1-joelagnelf at nvidia.com/
Link to DRM buddy move discussion:
https://lore.kernel.org/all/20251124234432.1988476-1-joelagnelf at nvidia.com/
Joel Fernandes (3):
rust: clist: Add support to interface with C linked lists
gpu: Move DRM buddy allocator one level up
rust: gpu: Add GPU buddy allocator bindings
Documentation/gpu/drm-mm.rst | 10 +-
MAINTAINERS | 7 +
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 2 +
drivers/gpu/buddy.c | 1310 +++++++++++++++++
drivers/gpu/drm/Kconfig | 1 +
drivers/gpu/drm/Kconfig.debug | 4 +-
drivers/gpu/drm/amd/amdgpu/Kconfig | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 80 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
drivers/gpu/drm/drm_buddy.c | 1287 +---------------
drivers/gpu/drm/i915/Kconfig | 1 +
drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 6 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
drivers/gpu/drm/tests/Makefile | 1 -
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 5 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
drivers/gpu/drm/xe/Kconfig | 1 +
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 73 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
drivers/gpu/tests/Makefile | 3 +
.../gpu_buddy_test.c} | 390 ++---
drivers/gpu/tests/gpu_random.c | 48 +
drivers/gpu/tests/gpu_random.h | 28 +
drivers/video/Kconfig | 2 +
include/drm/drm_buddy.h | 163 +-
include/linux/gpu_buddy.h | 177 +++
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 +
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 12 +
rust/kernel/clist.rs | 357 +++++
rust/kernel/gpu/buddy.rs | 527 +++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 3 +
42 files changed, 2944 insertions(+), 1800 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/buddy.c
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (68%)
create mode 100644 drivers/gpu/tests/gpu_random.c
create mode 100644 drivers/gpu/tests/gpu_random.h
create mode 100644 include/linux/gpu_buddy.h
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
Joel Fernandes
2025-Dec-04 21:51 UTC
[PATCH v4 1/3] rust: clist: Add support to interface with C linked lists
Add a new module `clist` for working with C's doubly circular linked
lists. Provide low-level iteration over list nodes.
Typed iteration over actual items is provided with a `clist_create`
macro to assist in creation of the `CList` type.
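
For sentinel heads that live in Rust-managed memory, `CListHead::try_init`
provides a fallible pin-initializer. A minimal sketch, assuming a hypothetical
C helper `fill_list()` that populates the list through the raw sentinel and
returns an errno:

  let list = KBox::pin_init(
      CListHead::try_init(|head| {
          // SAFETY: `head.as_raw()` is a valid, initialized sentinel;
          // `fill_list()` is a hypothetical C populator.
          to_result(unsafe { bindings::fill_list(head.as_raw()) })
      }),
      GFP_KERNEL,
  )?;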
Signed-off-by: Joel Fernandes <joelagnelf at nvidia.com>
---
MAINTAINERS | 7 +
rust/helpers/helpers.c | 1 +
rust/helpers/list.c | 12 ++
rust/kernel/clist.rs | 357 +++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
5 files changed, 378 insertions(+)
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 5f7aa6a8a9a1..fb2ff877b6d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22666,6 +22666,13 @@ F: rust/kernel/init.rs
F: rust/pin-init/
K: \bpin-init\b|pin_init\b|PinInit
+RUST TO C LIST INTERFACES
+M: Joel Fernandes <joelagnelf at nvidia.com>
+M: Alexandre Courbot <acourbot at nvidia.com>
+L: rust-for-linux at vger.kernel.org
+S: Maintained
+F: rust/kernel/clist.rs
+
RXRPC SOCKETS (AF_RXRPC)
M: David Howells <dhowells at redhat.com>
M: Marc Dionne <marc.dionne at auristor.com>
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 79c72762ad9c..634fa2386bbb 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
+#include "list.c"
#include "maple_tree.c"
#include "mm.c"
#include "mutex.c"
diff --git a/rust/helpers/list.c b/rust/helpers/list.c
new file mode 100644
index 000000000000..6044979c7a2e
--- /dev/null
+++ b/rust/helpers/list.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Helpers for C Circular doubly linked list implementation.
+ */
+
+#include <linux/list.h>
+
+void rust_helper_list_add_tail(struct list_head *new, struct list_head *head)
+{
+ list_add_tail(new, head);
+}
diff --git a/rust/kernel/clist.rs b/rust/kernel/clist.rs
new file mode 100644
index 000000000000..b4ee3149903a
--- /dev/null
+++ b/rust/kernel/clist.rs
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A C doubly circular intrusive linked list interface for rust code.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! bindings,
+//! clist::init_list_head,
+//! clist_create,
+//! types::Opaque, //
+//! };
+//! # // Create test list with values (0, 10, 20) - normally done by C code but it is
+//! # // emulated here for doctests using the C bindings.
+//! # use core::mem::MaybeUninit;
+//! #
+//! # /// C struct with embedded `list_head` (typically will be allocated by C code).
+//! # #[repr(C)]
+//! # pub(crate) struct SampleItemC {
+//! # pub value: i32,
+//! # pub link: bindings::list_head,
+//! # }
+//! #
+//! # let mut head = MaybeUninit::<bindings::list_head>::uninit();
+//! #
+//! # let head = head.as_mut_ptr();
+//! # // SAFETY: head and all the items are test objects allocated in this scope.
+//! # unsafe { init_list_head(head) };
+//! #
+//! # let mut items = [
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # ];
+//! #
+//! # for (i, item) in items.iter_mut().enumerate() {
+//! # let ptr = item.as_mut_ptr();
+//! #         // SAFETY: pointers are to allocated test objects with a list_head field.
+//! # unsafe {
+//! # (*ptr).value = i as i32 * 10;
+//! #             // addr_of_mut!() computes address of link directly as link is uninitialized.
+//! # init_list_head(core::ptr::addr_of_mut!((*ptr).link));
+//! #         bindings::list_add_tail(&mut (*ptr).link, head);
+//! # }
+//! # }
+//!
+//! // Rust wrapper for the C struct.
+//! // The list item struct in this example is defined in C code as:
+//! // struct SampleItemC {
+//! // int value;
+//! // struct list_head link;
+//! // };
+//! //
+//! #[repr(transparent)]
+//! pub(crate) struct Item(Opaque<SampleItemC>);
+//!
+//! impl Item {
+//!     pub(crate) fn value(&self) -> i32 {
+//! // SAFETY: [`Item`] has same layout as [`SampleItemC`].
+//! unsafe { (*self.0.get()).value }
+//! }
+//! }
+//!
+//! // Create typed [`CList`] from sentinel head.
+//! // SAFETY: head is valid, items are [`SampleItemC`] with embedded `link` field.
+//! let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
+//!
+//! // Iterate directly over typed items.
+//! let mut found_0 = false;
+//! let mut found_10 = false;
+//! let mut found_20 = false;
+//!
+//! for item in list.iter() {
+//! let val = item.value();
+//! if val == 0 { found_0 = true; }
+//! if val == 10 { found_10 = true; }
+//! if val == 20 { found_20 = true; }
+//! }
+//!
+//! assert!(found_0 && found_10 && found_20);
+//! ```
+
+use core::{
+ iter::FusedIterator,
+ marker::PhantomData, //
+};
+
+use crate::{
+ bindings,
+ types::Opaque, //
+};
+
+use pin_init::PinInit;
+
+/// Initialize a `list_head` object to point to itself.
+///
+/// # Safety
+///
+/// `list` must be a valid pointer to a `list_head` object.
+#[inline]
+pub unsafe fn init_list_head(list: *mut bindings::list_head) {
+ // SAFETY: Caller guarantees `list` is a valid pointer to a `list_head`.
+ unsafe {
+ (*list).next = list;
+ (*list).prev = list;
+ }
+}
+
+/// Wraps a `list_head` object for use in intrusive linked lists.
+///
+/// # Invariants
+///
+/// - [`CListHead`] represents an allocated and valid `list_head` structure.
+/// - Once a [`CListHead`] is created in Rust, it will not be modified by non-Rust code.
+/// - All `list_head` for individual items are not modified for the lifetime of [`CListHead`].
+#[repr(transparent)]
+pub struct CListHead(Opaque<bindings::list_head>);
+
+impl CListHead {
+    /// Create a `&CListHead` reference from a raw `list_head` pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
+    /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+        // SAFETY:
+        // - [`CListHead`] has same layout as `list_head`.
+        // - `ptr` is valid and unmodified for 'a.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Get the raw `list_head` pointer.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::list_head {
+        self.0.get()
+    }
+
+    /// Get the next [`CListHead`] in the list.
+    #[inline]
+    pub fn next(&self) -> &Self {
+        let raw = self.as_raw();
+        // SAFETY:
+        // - `self.as_raw()` is valid per type invariants.
+        // - The `next` pointer is guaranteed to be non-NULL.
+        unsafe { Self::from_raw((*raw).next) }
+    }
+
+    /// Get the previous [`CListHead`] in the list.
+    #[inline]
+    pub fn prev(&self) -> &Self {
+        let raw = self.as_raw();
+        // SAFETY:
+        // - self.as_raw() is valid per type invariants.
+        // - The `prev` pointer is guaranteed to be non-NULL.
+        unsafe { Self::from_raw((*raw).prev) }
+    }
+
+    /// Check if this node is linked in a list (not isolated).
+    #[inline]
+    pub fn is_linked(&self) -> bool {
+        let raw = self.as_raw();
+        // SAFETY: self.as_raw() is valid per type invariants.
+        unsafe { (*raw).next != raw && (*raw).prev != raw }
+    }
+
+    /// Fallible pin-initializer that initializes and then calls user closure.
+    ///
+    /// Initializes the list head first, then passes `&CListHead` to the closure.
+    /// This hides the raw FFI pointer from the user.
+    pub fn try_init<E>(
+        init_func: impl FnOnce(&CListHead) -> Result<(), E>,
+    ) -> impl PinInit<Self, E> {
+        // SAFETY: init_list_head initializes the list_head to point to itself.
+        // After initialization, we create a reference to pass to the closure.
+        unsafe {
+            pin_init::pin_init_from_closure(move |slot: *mut Self| {
+                init_list_head(slot.cast());
+                // SAFETY: slot is now initialized, safe to create reference.
+                init_func(&*slot)
+            })
+        }
+    }
+}
+
+// SAFETY: [`CListHead`] can be sent to any thread.
+unsafe impl Send for CListHead {}
+
+// SAFETY: [`CListHead`] can be shared among threads as it is not modified
+// by non-Rust code per type invariants.
+unsafe impl Sync for CListHead {}
+
+impl PartialEq for CListHead {
+    fn eq(&self, other: &Self) -> bool {
+ self.as_raw() == other.as_raw()
+ }
+}
+
+impl Eq for CListHead {}
+
+/// Low-level iterator over `list_head` nodes.
+///
+/// An iterator used to iterate over a C intrusive linked list (`list_head`). Caller has to
+/// perform conversion of returned [`CListHead`] to an item (using `container_of` macro or similar).
+///
+/// # Invariants
+///
+/// [`CListHeadIter`] is iterating over an allocated, initialized and valid list.
+struct CListHeadIter<'a> {
+    current_head: &'a CListHead,
+    list_head: &'a CListHead,
+}
+
+impl<'a> Iterator for CListHeadIter<'a> {
+    type Item = &'a CListHead;
+
+    #[inline]
+    fn next(&mut self) -> Option<Self::Item> {
+ // Advance to next node.
+ let next = self.current_head.next();
+
+ // Check if we've circled back to the sentinel head.
+ if next == self.list_head {
+ None
+ } else {
+ self.current_head = next;
+ Some(self.current_head)
+ }
+ }
+}
+
+impl<'a> FusedIterator for CListHeadIter<'a> {}
+
+/// A typed C linked list with a sentinel head.
+///
+/// A sentinel head represents the entire linked list and can be used for
+/// iteration over items of type `T`; it is not associated with a specific item.
+///
+/// The const generic `OFFSET` specifies the byte offset of the `list_head` field within
+/// the struct that `T` wraps.
+///
+/// # Invariants
+///
+/// - `head` is an allocated and valid C `list_head` structure that is the list's sentinel.
+/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
+/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
+/// - The underlying `list_head` (and entire list) is not modified for the lifetime `'a`.
+pub struct CList<'a, T, const OFFSET: usize> {
+    head: &'a CListHead,
+    _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> CList<'a, T, OFFSET> {
+ /// Create a typed [`CList`] from a raw sentinel `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+    /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure
+    ///   representing a list sentinel.
+    /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+    /// - The list must contain items where the `list_head` field is at byte offset `OFFSET`.
+ /// - `T` must be `#[repr(transparent)]` over the C struct.
+ #[inline]
+ pub unsafe fn from_raw(ptr: *mut bindings::list_head) -> Self {
+ Self {
+            // SAFETY: Caller guarantees `ptr` is a valid, sentinel `list_head` object.
+ head: unsafe { CListHead::from_raw(ptr) },
+ _phantom: PhantomData,
+ }
+ }
+
+ /// Get the raw sentinel `list_head` pointer.
+ #[inline]
+    pub fn as_raw(&self) -> *mut bindings::list_head {
+ self.head.as_raw()
+ }
+
+ /// Check if the list is empty.
+ #[inline]
+    pub fn is_empty(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next == raw }
+ }
+
+ /// Create an iterator over typed items.
+ #[inline]
+    pub fn iter(&self) -> CListIter<'a, T, OFFSET> {
+ CListIter {
+ head_iter: CListHeadIter {
+ current_head: self.head,
+ list_head: self.head,
+ },
+ _phantom: PhantomData,
+ }
+ }
+}
+
+/// High-level iterator over typed list items.
+pub struct CListIter<'a, T, const OFFSET: usize> {
+    head_iter: CListHeadIter<'a>,
+    _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> Iterator for CListIter<'a, T, OFFSET> {
+    type Item = &'a T;
+
+    fn next(&mut self) -> Option<Self::Item> {
+ let head = self.head_iter.next()?;
+
+ // Convert to item using OFFSET.
+        // SAFETY: `item_ptr` calculation from `OFFSET` (calculated using offset_of!)
+        // is valid per invariants.
+        Some(unsafe { &*head.as_raw().byte_sub(OFFSET).cast::<T>() })
+ }
+}
+
+impl<'a, T, const OFFSET: usize> FusedIterator for CListIter<'a, T, OFFSET> {}
+
+/// Create a C doubly-circular linked list interface [`CList`] from a raw `list_head` pointer.
+///
+/// This macro creates a [`CList<T, OFFSET>`] that can iterate over items of type `$rust_type`
+/// linked via the `$field` field in the underlying C struct `$c_type`.
+///
+/// # Arguments
+///
+/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
+/// - `$rust_type`: Each item's rust wrapper type.
+/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
+/// - `$field`: The name of the `list_head` field within the C struct.
+///
+/// # Safety
+///
+/// The caller must ensure:
+/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
+///   unmodified for the lifetime of the rust [`CList`].
+/// - The list contains items of type `$c_type` linked via an embedded `$field`.
+/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
+/// - The macro is called from an unsafe block.
+///
+/// # Examples
+///
+/// Refer to the examples in the [`crate::clist`] module documentation.
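+///
+/// A typical invocation, borrowing the illustrative `Item`/`SampleItemC` types
+/// from the module-level example above:
+///
+/// ```ignore
+/// // SAFETY: `head` is a valid, unmodified sentinel `list_head`.
+/// let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
+/// ```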
+#[macro_export]
+macro_rules! clist_create {
+ ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
+ // Compile-time check that field path is a list_head.
+        let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
+            |p| ::core::ptr::addr_of!((*p).$($field).+);
+
+ // Calculate offset and create `CList`.
+ const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
+ $crate::clist::CList::<$rust_type, OFFSET>::from_raw($head)
+ }};
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index c2eea9a2a345..b69cc5ed3b59 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -72,6 +72,7 @@
pub mod bug;
#[doc(hidden)]
pub mod build_assert;
+pub mod clist;
pub mod clk;
#[cfg(CONFIG_CONFIGFS_FS)]
pub mod configfs;
--
2.34.1
Joel Fernandes
2025-Dec-04 21:51 UTC
[PATCH v4 3/3] rust: gpu: Add GPU buddy allocator bindings
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system for GPU physical memory
allocation, which nova-core will use.
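
A condensed usage sketch, adapted from the doctest in buddy.rs below (sizes
come from kernel::sizes; error handling elided with `?`):

  use kernel::gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams};
  use kernel::sizes::*;

  // 1 GiB of managed address space with a 4 KiB minimum chunk.
  let mut buddy = GpuBuddy::new(GpuBuddyParams {
      physical_memory_size_bytes: SZ_1G as u64,
      chunk_size_bytes: SZ_4K as u64,
  })?;

  // Allocate 16 MiB anywhere in the range; the blocks free themselves on drop.
  let blocks = buddy.alloc_blocks(GpuBuddyAllocParams {
      start_range_address: 0,
      end_range_address: 0, // Entire range.
      size_bytes: SZ_16M as u64,
      min_block_size_bytes: SZ_16M as u64,
      buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
  })?;

  for block in blocks.iter() {
      pr_info!("offset={} size={}\n", block.offset(), block.size());
  }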
Signed-off-by: Joel Fernandes <joelagnelf at nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 527 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 569 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 6b973589a546..86a7e304b7ab 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -112,6 +113,16 @@ const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION = GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..415836b86abf
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 634fa2386bbb..6db7c4c25afa 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -29,6 +29,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..3e1a9617e6aa
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,527 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//!     gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let mut buddy = GpuBuddy::new(GpuBuddyParams {
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - reused across tests with field overrides.
+//! let params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! let topdown = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?,
+//! ..params
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! let allocated = buddy.alloc_blocks(params)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let params_4m = GpuBuddyAllocParams {
+//! end_range_address: SZ_4M as u64,
+//! size_bytes: SZ_4M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! ..params
+//! };
+//! let frag1 = buddy.alloc_blocks(params_4m)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! start_range_address: SZ_8M as u64,
+//! end_range_address: (SZ_8M + SZ_4M) as u64,
+//! ..params_4m
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! let fragmented = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! end_range_address: SZ_16M as u64,
+//! size_bytes: SZ_8M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! ..params
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let mut small = GpuBuddy::new(GpuBuddyParams {
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! let hole1 = small.alloc_blocks(params_4m)?;
+//! let hole2 = small.alloc_blocks(GpuBuddyAllocParams {
+//! start_range_address: SZ_8M as u64,
+//! end_range_address: (SZ_8M + SZ_4M) as u64,
+//! ..params_4m
+//! })?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! let result = small.alloc_blocks(GpuBuddyAllocParams {
+//! size_bytes: SZ_8M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?,
+//! ..params
+//! });
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist::CListHead,
+ clist_create,
+ error::to_result,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+    pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+    /// Allocate physically contiguous blocks.
+    pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+    /// Request allocation from the cleared (zeroed) memory. The zeroing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+    /// Use `|` operator to combine flags if needed, before calling this method.
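+    ///
+    /// For illustration, a sketch combining two flags (this combination is
+    /// assumed meaningful here; `try_new` rejects the RANGE + TOPDOWN combination):
+    ///
+    /// ```ignore
+    /// let flags = BuddyFlags::try_new(
+    ///     BuddyFlags::RANGE_ALLOCATION | BuddyFlags::CONTIGUOUS_ALLOCATION,
+    /// )?;
+    /// ```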
+ pub fn try_new(flags: usize) -> Result<Self> {
+        // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+        if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+        // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+        // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+        if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+#[derive(Clone, Copy)]
+pub struct GpuBuddyParams {
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+    /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+#[derive(Clone, Copy)]
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ #[pin]
+ lock: Mutex<()>,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+    fn new(params: &GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+            // SAFETY: ptr points to valid uninitialized memory from the pin-init
+            // infrastructure. gpu_buddy_init will initialize the structure.
+            to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+    fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+    fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+    inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+    fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+    /// Creates a buddy allocator that manages a contiguous address space of the given
+    /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+    pub fn new(params: GpuBuddyParams) -> Result<Self> {
+        Ok(Self(Arc::pin_init(
+            GpuBuddyInner::new(&params),
+ GFP_KERNEL,
+ )?))
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+    pub fn chunk_size(&self) -> u64 {
+        let guard = self.0.lock();
+        // SAFETY: guard provides exclusive access to the allocator.
+        unsafe { (*guard.as_raw()).chunk_size }
+    }
+
+    /// Get the total managed size.
+    pub fn size(&self) -> u64 {
+        let guard = self.0.lock();
+        // SAFETY: guard provides exclusive access to the allocator.
+        unsafe { (*guard.as_raw()).size }
+    }
+
+    /// Get the available (free) memory in bytes.
+    pub fn free_memory_bytes(&self) -> u64 {
+        let guard = self.0.lock();
+        // SAFETY: guard provides exclusive access to the allocator.
+        unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+    /// Returns an [`Arc<AllocatedBlocks>`] structure that owns the allocated blocks
+    /// and automatically frees them when all references are dropped.
+    pub fn alloc_blocks(&mut self, params: GpuBuddyAllocParams) -> Result<Arc<AllocatedBlocks>> {
+        let buddy_arc = Arc::clone(&self.0);
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ let init = try_pin_init!(AllocatedBlocks {
+ list <- CListHead::try_init(|list| {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy_arc.lock();
+
+                // SAFETY: guard provides exclusive access, list is initialized.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ params.start_range_address,
+ params.end_range_address,
+ params.size_bytes,
+ params.min_block_size_bytes,
+ list.as_raw(),
+ params.buddy_flags.as_raw(),
+ )
+ })
+ }),
+            buddy: Arc::clone(&buddy_arc),
+ flags: params.buddy_flags,
+ });
+
+ Arc::pin_init(init, GFP_KERNEL)
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+    pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] references. The blocks
+ /// are only valid for the duration of the borrow of `self`.
+    pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+        // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+    fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+    fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+    pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+    pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely sent between threads.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely shared among threads.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the buddy allocator.
+///
+/// This wrapper holds references to the block and the allocation list,
+/// enabling the `size()` method which requires the allocator.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+    block: &'a Block,
+    alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+    pub fn offset(&self) -> u64 {
+        self.block.offset()
+    }
+
+    /// Get the block order (size = chunk_size << order).
+    pub fn order(&self) -> u32 {
+        self.block.order()
+    }
+
+    /// Get the block's size in bytes.
+    pub fn size(&self) -> u64 {
+        // Acquire guard to calculate block size since it is calculated from
+        // the chunk size, which requires access to the allocator. While the chunk size
+        // cannot change after initialization, we still need the guard to gain access
+        // to the allocator's pointer.
+ let guard = self.alloc.buddy.lock();
+ // SAFETY:
+ // - Guard provides exclusive access to the allocator.
+ // - `block.as_raw()` is a valid pointer per the type's invariants.
+        unsafe { bindings::gpu_buddy_block_size(guard.as_raw(), self.block.as_raw()) }
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index b69cc5ed3b59..850cbbf4c3e7 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -95,6 +95,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
pub mod id_pool;
pub mod init;
pub mod io;
--
2.34.1
Joel Fernandes
2025-Dec-04 21:57 UTC
[PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings
> On Dec 4, 2025, at 4:53 PM, Joel Fernandes <joelagnelf at nvidia.com> wrote:
>
> This series combines a number of other series which build up to the same goal:
> to make it possible to use DRM buddy from nova-core rust code. See links to the
> different series below.
>
> The git tree with all patches can be found at:
> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag:
> clist-buddy-v4-20251204)

FYI - this series is rebased on linux-next to reduce conflicts, but I am happy
to rebase on another tree if needed/requested.

Thanks,

- Joel

> [...]