Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 00/14] gpu: nova-core: Boot GSP to RISC-V active
Changes since v4:

The main change for this revision is to derive the Zeroable trait for most of
our bindings. The rest of the changes just address relatively minor review
comments made for v4 of the series.

This series is still based on a merge of drm-rust-next and
drivers-core-testing made by Alex. A complete copy of the tree with these
patches applied will be available at
https://github.com/apopple-nvidia/linux/tree/nova-core-for-upstream-v5

Changes since v3:

The main change for v4 is to switch to using the `init!` macros to ensure all
fields of in-place initialised structs get initialised. This requires our
bindings to derive the `Zeroable` trait; however, for now I have left this as
a TODO with manual implementations for each trait. That is because rebasing
the binding changes is a bit of a pain, so I want to give reviewers a chance
to see if deriving `Zeroable` for all bindings makes sense or not.

Other changes include addressing most of the outstanding TODOs left in v3 and
addressing review comments from v2 and v3, in particular some of the comments
by Timur that had not been picked up.

Changes since v2:

The main change since v2 has been to make all firmware bindings completely
opaque. It has been made clear this is a prerequisite for this series to
progress upstream, as it should make supporting different firmware versions
easier in the future. Overall the extra constructors and accessors add a
couple of hundred lines of code and a few extra unsafe statements.

Other changes include addressing a bunch of other comments - see the
individual patches for further details. There are also still some outstanding
comments and TODOs to address which I have not gotten to yet - these will be
done in the next version of this series.

Changes since v1:

- Based on feedback from Alex the GSP command queue logic was reworked
  extensively. This involved creating a new data structure (DmaGspMem) to
  manage the shared memory areas between CPU and GSP.
- This data structure helps ensure the safety constraints are met when the
  CPU is reading/writing the shared memory queues.
- Several other minor comments were addressed, as noted in the individual
  patches.

This series builds on top of Alex's series[1], most of which has been merged
into drm-rust-next, to continue initialising the GSP into a state where it
becomes active and starts communicating with the host. A tree including these
patches with the prerequisite patches is available at [2].

It includes patches to initialise several important data structures required
to boot the GSP. The biggest change is the implementation of the
command/message circular queue used to establish communication between the
GSP and host in patch 6. Admittedly this patch is rather large - if necessary
it could be split into send and receive patches if people prefer. This is
required to configure and boot the GSP.

However this series does not get the GSP to a fully active state. Instead it
gets it to a state where the GSP sends a message to the host with a sequence
of instructions which need running to get to the active state. A subsequent
series will implement processing of this message and allow the GSP to get to
the fully active state.

A full tree including the prerequisites for this patch series is available at
https://github.com/apopple-nvidia/linux/tree/nova-core-for-upstream.
[1] - https://lore.kernel.org/rust-for-linux/20250911-nova_firmware-v5-0-5a8a33bddca1 at nvidia.com/
[2] - https://github.com/apopple-nvidia/linux/tree/nova-core-for-upstream-v2

To: rust-for-linux at vger.kernel.org
To: dri-devel at lists.freedesktop.org
To: Danilo Krummrich <dakr at kernel.org>
To: Alexandre Courbot <acourbot at nvidia.com>
Cc: Miguel Ojeda <ojeda at kernel.org>
Cc: Alex Gaynor <alex.gaynor at gmail.com>
Cc: Boqun Feng <boqun.feng at gmail.com>
Cc: Gary Guo <gary at garyguo.net>
Cc: Björn Roy Baron <bjorn3_gh at protonmail.com>
Cc: Benno Lossin <lossin at kernel.org>
Cc: Andreas Hindborg <a.hindborg at kernel.org>
Cc: Alice Ryhl <aliceryhl at google.com>
Cc: Trevor Gross <tmgross at umich.edu>
Cc: David Airlie <airlied at gmail.com>
Cc: Simona Vetter <simona at ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>
Cc: Maxime Ripard <mripard at kernel.org>
Cc: Thomas Zimmermann <tzimmermann at suse.de>
Cc: John Hubbard <jhubbard at nvidia.com>
Cc: Joel Fernandes <joelagnelf at nvidia.com>
Cc: Timur Tabi <ttabi at nvidia.com>
Cc: linux-kernel at vger.kernel.org
Cc: nouveau at lists.freedesktop.org

Alistair Popple (11):
  gpu: nova-core: Set correct DMA mask
  gpu: nova-core: Create initial Gsp
  gpu: nova-core: gsp: Create wpr metadata
  gpu: nova-core: Add zeroable trait to bindings
  gpu: nova-core: Add GSP command queue bindings
  gpu: nova-core: gsp: Add GSP command queue handling
  gpu: nova-core: gsp: Create rmargs
  gpu: nova-core: Add bindings and accessors for GspSystemInfo
  gpu: nova-core: Add bindings for the GSP RM registry tables
  gpu: nova-core: gsp: Create RM registry and sysinfo commands
  nova-core: gsp: Boot GSP

Joel Fernandes (3):
  gpu: nova-core: Add a slice-buffer (sbuffer) datastructure
  nova-core: falcon: Add support to check if RISC-V is active
  nova-core: falcon: Add support to write firmware version

 drivers/gpu/nova-core/driver.rs               |  16 +
 drivers/gpu/nova-core/falcon.rs               |  15 +
 drivers/gpu/nova-core/fb.rs                   |   1 -
 drivers/gpu/nova-core/firmware/gsp.rs         |   3 +-
 drivers/gpu/nova-core/firmware/riscv.rs       |   9 +-
 drivers/gpu/nova-core/gpu.rs                  |   2 +-
 drivers/gpu/nova-core/gsp.rs                  | 134 +++-
 drivers/gpu/nova-core/gsp/boot.rs             |  76 ++-
 drivers/gpu/nova-core/gsp/cmdq.rs             | 509 +++++++++++++++
 drivers/gpu/nova-core/gsp/commands.rs         | 115 ++++
 drivers/gpu/nova-core/gsp/fw.rs               | 455 +++++++++++++-
 drivers/gpu/nova-core/gsp/fw/commands.rs      | 100 +++
 drivers/gpu/nova-core/gsp/fw/r570_144.rs      |   1 +
 .../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 592 +++++++++++++++++-
 drivers/gpu/nova-core/nova_core.rs            |   1 +
 drivers/gpu/nova-core/regs.rs                 |  17 +-
 drivers/gpu/nova-core/sbuffer.rs              | 215 +++++++
 scripts/Makefile.build                        |   2 +-
 18 files changed, 2231 insertions(+), 32 deletions(-)
 create mode 100644 drivers/gpu/nova-core/gsp/cmdq.rs
 create mode 100644 drivers/gpu/nova-core/gsp/commands.rs
 create mode 100644 drivers/gpu/nova-core/gsp/fw/commands.rs
 create mode 100644 drivers/gpu/nova-core/sbuffer.rs
--
2.50.1
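For readers new to the series, the command/message queue mentioned above (patch 6) is, at its core, a pair of page-granular circular buffers with separate read and write pointers shared between the CPU and the GSP. The following is a minimal, userspace-style sketch of that general shape; it is illustrative only, does not reproduce the driver's cmdq.rs, and every name in it is hypothetical.

    const PAGE_SIZE: usize = 4096;

    /// A toy single-producer circular queue indexed in whole pages.
    struct CircularQueue {
        pages: Vec<[u8; PAGE_SIZE]>,
        write_ptr: usize, // next page the producer will fill
        read_ptr: usize,  // next page the consumer will drain
    }

    impl CircularQueue {
        fn new(num_pages: usize) -> Self {
            Self { pages: vec![[0u8; PAGE_SIZE]; num_pages], write_ptr: 0, read_ptr: 0 }
        }

        /// Free pages; one page is kept unused so "full" and "empty" can be told apart.
        fn free_pages(&self) -> usize {
            let n = self.pages.len();
            (self.read_ptr + n - self.write_ptr - 1) % n
        }

        /// Copy `msg` in page-sized chunks and advance the write pointer. A real
        /// driver would then publish the new write pointer to the device.
        fn push(&mut self, msg: &[u8]) -> Result<(), &'static str> {
            if msg.len().div_ceil(PAGE_SIZE) > self.free_pages() {
                return Err("queue full");
            }
            for chunk in msg.chunks(PAGE_SIZE) {
                self.pages[self.write_ptr][..chunk.len()].copy_from_slice(chunk);
                self.write_ptr = (self.write_ptr + 1) % self.pages.len();
            }
            Ok(())
        }
    }

    fn main() {
        let mut q = CircularQueue::new(8);
        q.push(&[0xAA; 5000]).unwrap(); // spans two pages
        assert_eq!(q.write_ptr, 2);
    }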
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 01/14] gpu: nova-core: Set correct DMA mask
Set the correct DMA mask. Without this DMA will fail on some setups.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Update SAFETY comment for dma_set_mask_and_coherent()
- Add TODO for using different masks when we support more GPU models
Changes for v4:
- Use a const (GPU_DMA_BITS) instead of a magic number
Changes for v2:
- Update DMA mask to correct value for Ampere/Turing (47 bits)
---
drivers/gpu/nova-core/driver.rs | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index edc72052e27a..2407d0ab15e2 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -3,6 +3,8 @@
use kernel::{
auxiliary, c_str,
device::Core,
+ dma::Device,
+ dma::DmaMask,
pci,
pci::{Class, ClassMask, Vendor},
prelude::*,
@@ -20,6 +22,15 @@ pub(crate) struct NovaCore {
}
const BAR0_SIZE: usize = SZ_16M;
+
+// For now we only support Ampere which can use up to 47-bit DMA addresses.
+//
+// TODO: Add an abstraction for this to support newer GPUs which may support
+// larger DMA addresses. Limiting these GPUs to smaller address widths won't
+// have any adverse effects, unless installed on systems which require larger
+// DMA addresses. These systems should be quite rare.
+const GPU_DMA_BITS: u32 = 47;
+
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
kernel::pci_device_table!(
@@ -57,6 +68,11 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> Result<Pin<KBox<Self
pdev.enable_device_mem()?;
pdev.set_master();
+ // SAFETY: No concurrent DMA allocations or mappings can be made because
+ // the device is still being probed and therefore isn't being used by
+ // other threads of execution.
+ unsafe { pdev.dma_set_mask_and_coherent(DmaMask::new::<GPU_DMA_BITS>())? };
+
let devres_bar = Arc::pin_init(
pdev.iomap_region_sized::<BAR0_SIZE>(0, c_str!("nova-core/bar0")),
GFP_KERNEL,
--
2.50.1
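As background for the GPU_DMA_BITS constant above: a 47-bit DMA mask simply bounds the highest bus address the device will be handed. A minimal sketch of the arithmetic (this is not the kernel's DmaMask implementation, just the underlying maths):

    fn dma_mask(bits: u32) -> u64 {
        if bits >= 64 { u64::MAX } else { (1u64 << bits) - 1 }
    }

    fn main() {
        let mask = dma_mask(47);
        assert_eq!(mask, 0x7fff_ffff_ffff); // highest byte a 47-bit device can address
        // Anything above the mask needs an IOMMU or bounce buffering, which is why
        // the mask must be set before the first DMA allocation in probe().
        assert!(0x8000_0000_0000u64 > mask);
    }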
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 02/14] gpu: nova-core: Create initial Gsp
The GSP requires several areas of memory to operate. Each of these has
its own simple embedded page table. Set these up and map them for DMA
to/from the GSP using CoherentAllocations. Return the DMA handles describing
where each of these regions is for future use when booting the GSP.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Move GSP_HEAP_ALIGNMENT to gsp/fw.rs and add a comment.
- Create a LogBuffer type.
- Use checked_add to ensure PTE values don't overflow.
- Added some type documentation (shamelessly stolen from Nouveau)
Changes for v3:
- Clean up the PTE array creation, with much thanks to Alex for doing
most of it (please let me know if I should put you as co-developer!)
Changes for v2:
- Renamed GspMemObjects to Gsp as that is what they are
- Rebased on Alex's latest series
---
drivers/gpu/nova-core/gpu.rs | 2 +-
drivers/gpu/nova-core/gsp.rs | 106 ++++++++++++++++--
drivers/gpu/nova-core/gsp/fw.rs | 64 ++++++++++-
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 19 ++++
4 files changed, 179 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index ea124d1912e7..c1396775e9b6 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -197,7 +197,7 @@ pub(crate) fn new<'a>(
sec2_falcon: Falcon::new(pdev.as_ref(), spec.chipset, bar, true)?,
- gsp <- Gsp::new(),
+ gsp <- Gsp::new(pdev)?,
_: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index 221281da1a45..f1727173bd42 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -2,25 +2,117 @@
mod boot;
+use kernel::device;
+use kernel::dma::CoherentAllocation;
+use kernel::dma::DmaAddress;
+use kernel::dma_write;
+use kernel::pci;
use kernel::prelude::*;
-use kernel::ptr::Alignment;
+use kernel::transmute::AsBytes;
pub(crate) use fw::{GspFwWprMeta, LibosParams};
mod fw;
+use fw::LibosMemoryRegionInitArgument;
+
pub(crate) const GSP_PAGE_SHIFT: usize = 12;
pub(crate) const GSP_PAGE_SIZE: usize = 1 << GSP_PAGE_SHIFT;
-pub(crate) const GSP_HEAP_ALIGNMENT: Alignment = Alignment::new::<{ 1 << 20 }>();
+
+/// Number of GSP pages to use in a RM log buffer.
+const RM_LOG_BUFFER_NUM_PAGES: usize = 0x10;
/// GSP runtime data.
-///
-/// This is an empty pinned placeholder for now.
#[pin_data]
-pub(crate) struct Gsp {}
+pub(crate) struct Gsp {
+ pub(crate) libos: CoherentAllocation<LibosMemoryRegionInitArgument>,
+ loginit: LogBuffer,
+ logintr: LogBuffer,
+ logrm: LogBuffer,
+}
+
+#[repr(C)]
+struct PteArray<const NUM_ENTRIES: usize>([u64; NUM_ENTRIES]);
+
+/// SAFETY: arrays of `u64` implement `AsBytes` and we are but a wrapper around it.
+unsafe impl<const NUM_ENTRIES: usize> AsBytes for PteArray<NUM_ENTRIES> {}
+
+impl<const NUM_PAGES: usize> PteArray<NUM_PAGES> {
+ fn new(handle: DmaAddress) -> Result<Self> {
+ let mut ptes = [0u64; NUM_PAGES];
+ for (i, pte) in ptes.iter_mut().enumerate() {
+ *pte = handle
+ .checked_add((i as u64) << GSP_PAGE_SHIFT)
+ .ok_or(EOVERFLOW)?;
+ }
+
+ Ok(Self(ptes))
+ }
+}
+
+/// The logging buffers are byte queues that contain encoded printf-like
+/// messages from GSP-RM. They need to be decoded by a special application
+/// that can parse the buffers.
+///
+/// The 'loginit' buffer contains logs from early GSP-RM init and
+/// exception dumps. The 'logrm' buffer contains the subsequent logs. Both are
+/// written to directly by GSP-RM and can be any multiple of GSP_PAGE_SIZE.
+///
+/// The physical address map for the log buffer is stored in the buffer
+/// itself, starting with offset 1. Offset 0 contains the "put" pointer (pp).
+/// Initially, pp is equal to 0. If the buffer has valid logging data in it,
+/// then pp points to the index into the buffer where the next logging entry will
+/// be written. Therefore, the logging data is valid if:
+/// 1 <= pp < sizeof(buffer)/sizeof(u64)
+struct LogBuffer(CoherentAllocation<u8>);
+
+impl LogBuffer {
+ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
+ const NUM_PAGES: usize = RM_LOG_BUFFER_NUM_PAGES;
+
+ let mut obj = Self(CoherentAllocation::<u8>::alloc_coherent(
+ dev,
+ NUM_PAGES * GSP_PAGE_SIZE,
+ GFP_KERNEL | __GFP_ZERO,
+ )?);
+ let ptes = PteArray::<NUM_PAGES>::new(obj.0.dma_handle())?;
+
+ // SAFETY: `obj` has just been created and we are its sole user.
+ unsafe {
+ // Copy the self-mapping PTE at the expected location.
+ obj.0
+ .as_slice_mut(size_of::<u64>(), size_of_val(&ptes))?
+ .copy_from_slice(ptes.as_bytes())
+ };
+
+ Ok(obj)
+ }
+}
impl Gsp {
- pub(crate) fn new() -> impl PinInit<Self> {
- pin_init!(Self {})
+ pub(crate) fn new(pdev: &pci::Device<device::Bound>) -> Result<impl PinInit<Self, Error>> {
+ let dev = pdev.as_ref();
+ let libos = CoherentAllocation::<LibosMemoryRegionInitArgument>::alloc_coherent(
+ dev,
+ GSP_PAGE_SIZE / size_of::<LibosMemoryRegionInitArgument>(),
+ GFP_KERNEL | __GFP_ZERO,
+ )?;
+
+ // Initialise the logging structures. The OpenRM equivalents are in:
+ // _kgspInitLibosLoggingStructures (allocates memory for buffers)
+ // kgspSetupLibosInitArgs_IMPL (creates pLibosInitArgs[] array)
+ let loginit = LogBuffer::new(dev)?;
+ dma_write!(libos[0] = LibosMemoryRegionInitArgument::new("LOGINIT", &loginit.0)?)?;
+ let logintr = LogBuffer::new(dev)?;
+ dma_write!(libos[1] = LibosMemoryRegionInitArgument::new("LOGINTR", &logintr.0)?)?;
+ let logrm = LogBuffer::new(dev)?;
+ dma_write!(libos[2] = LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0)?)?;
+
+ Ok(try_pin_init!(Self {
+ libos,
+ loginit,
+ logintr,
+ logrm,
+ }))
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 181baa401770..c3bececc29cd 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -7,15 +7,20 @@
use core::ops::Range;
-use kernel::ptr::Alignable;
+use kernel::dma::CoherentAllocation;
+use kernel::prelude::*;
+use kernel::ptr::{Alignable, Alignment};
use kernel::sizes::SZ_1M;
+use kernel::transmute::{AsBytes, FromBytes};
use crate::gpu::Chipset;
-use crate::gsp;
/// Dummy type to group methods related to heap parameters for running the GSP firmware.
pub(crate) struct GspFwHeapParams(());
+/// Minimum required alignment for the GSP heap.
+const GSP_HEAP_ALIGNMENT: Alignment = Alignment::new::<{ 1 << 20 }>();
+
impl GspFwHeapParams {
/// Returns the amount of GSP-RM heap memory used during GSP-RM boot and initialization (up to
/// and including the first client subdevice allocation).
@@ -29,7 +34,7 @@ fn base_rm_size(_chipset: Chipset) -> u64 {
/// Returns the amount of heap memory required to support a single channel allocation.
fn client_alloc_size() -> u64 {
u64::from(bindings::GSP_FW_HEAP_PARAM_CLIENT_ALLOC_SIZE)
- .align_up(gsp::GSP_HEAP_ALIGNMENT)
+ .align_up(GSP_HEAP_ALIGNMENT)
.unwrap_or(u64::MAX)
}
@@ -40,7 +45,7 @@ fn management_overhead(fb_size: u64) -> u64 {
u64::from(bindings::GSP_FW_HEAP_PARAM_SIZE_PER_GB_FB)
.saturating_mul(fb_size_gb)
- .align_up(gsp::GSP_HEAP_ALIGNMENT)
+ .align_up(GSP_HEAP_ALIGNMENT)
.unwrap_or(u64::MAX)
}
}
@@ -99,3 +104,54 @@ pub(crate) fn wpr_heap_size(&self, chipset: Chipset, fb_size: u64) -> u64 {
/// addresses of the GSP bootloader and firmware.
#[repr(transparent)]
pub(crate) struct GspFwWprMeta(bindings::GspFwWprMeta);
+
+/// Struct containing the arguments required to pass a memory buffer to the GSP
+/// for use during initialisation.
+///
+/// The GSP only understands 4K pages (GSP_PAGE_SIZE), so even if the kernel is
+/// configured for a larger page size (e.g. 64K pages), we need to give
+/// the GSP an array of 4K pages. Since we only create physically contiguous
+/// buffers the math to calculate the addresses is simple.
+///
+/// The buffers must be a multiple of GSP_PAGE_SIZE. GSP-RM also currently
+/// ignores the @kind field for LOGINIT, LOGINTR, and LOGRM, but expects the
+/// buffers to be physically contiguous anyway.
+///
+/// The memory allocated for the arguments must remain until the GSP sends the
+/// init_done RPC.
+#[repr(transparent)]
+pub(crate) struct LibosMemoryRegionInitArgument(bindings::LibosMemoryRegionInitArgument);
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for LibosMemoryRegionInitArgument {}
+
+// SAFETY: This struct only contains integer types for which all bit patterns
+// are valid.
+unsafe impl FromBytes for LibosMemoryRegionInitArgument {}
+
+impl LibosMemoryRegionInitArgument {
+ pub(crate) fn new<A: AsBytes + FromBytes>(
+ name: &'static str,
+ obj: &CoherentAllocation<A>,
+ ) -> Result<Self> {
+ /// Generates the `ID8` identifier required for some GSP objects.
+ fn id8(name: &str) -> u64 {
+ let mut bytes = [0u8; core::mem::size_of::<u64>()];
+
+ for (c, b) in name.bytes().rev().zip(&mut bytes) {
+ *b = c;
+ }
+
+ u64::from_ne_bytes(bytes)
+ }
+
+ Ok(Self(bindings::LibosMemoryRegionInitArgument {
+ id8: id8(name),
+ pa: obj.dma_handle(),
+ size: obj.size() as u64,
+ kind: bindings::LibosMemoryRegionKind_LIBOS_MEMORY_REGION_CONTIGUOUS.try_into()?,
+ loc: bindings::LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_SYSMEM.try_into()?,
+ ..Default::default()
+ }))
+ }
+}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 0407000cca22..6a14cc324391 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -124,3 +124,22 @@ fn default() -> Self {
}
}
}
+pub type LibosAddress = u64_;
+pub const LibosMemoryRegionKind_LIBOS_MEMORY_REGION_NONE: LibosMemoryRegionKind = 0;
+pub const LibosMemoryRegionKind_LIBOS_MEMORY_REGION_CONTIGUOUS: LibosMemoryRegionKind = 1;
+pub const LibosMemoryRegionKind_LIBOS_MEMORY_REGION_RADIX3: LibosMemoryRegionKind = 2;
+pub type LibosMemoryRegionKind = ffi::c_uint;
+pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_NONE: LibosMemoryRegionLoc = 0;
+pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_SYSMEM: LibosMemoryRegionLoc = 1;
+pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2;
+pub type LibosMemoryRegionLoc = ffi::c_uint;
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone)]
+pub struct LibosMemoryRegionInitArgument {
+ pub id8: LibosAddress,
+ pub pa: LibosAddress,
+ pub size: LibosAddress,
+ pub kind: u8_,
+ pub loc: u8_,
+ pub __bindgen_padding_0: [u8; 6usize],
+}
--
2.50.1
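A quick worked example of the `id8` helper introduced in LibosMemoryRegionInitArgument::new() above: the region name is reversed into an 8-byte buffer and read back as a native-endian u64, so that the hex value spells the name. The function body below is copied from the patch; the asserted value assumes a little-endian host.

    fn id8(name: &str) -> u64 {
        let mut bytes = [0u8; core::mem::size_of::<u64>()];
        for (c, b) in name.bytes().rev().zip(&mut bytes) {
            *b = c;
        }
        u64::from_ne_bytes(bytes)
    }

    fn main() {
        // "LOGINIT" reversed is "TINIGOL"; read back little-endian this is
        // 0x004c4f47494e4954, whose bytes from most to least significant
        // are '\0', 'L', 'O', 'G', 'I', 'N', 'I', 'T'.
        #[cfg(target_endian = "little")]
        assert_eq!(id8("LOGINIT"), 0x004c_4f47_494e_4954);
        println!("{:#018x}", id8("LOGINIT"));
    }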
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 03/14] gpu: nova-core: gsp: Create wpr metadata
The GSP requires some pieces of metadata to boot. These are passed in a
struct which the GSP transfers via DMA. Create this struct and get a
handle to it for future use when booting the GSP.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Make member visibility match the struct visibility (thanks Danilo)
Changes for v3:
- Don't re-export WPR constants (thanks Alex)
Changes for v2:
- Rebased on Alex's latest version
---
drivers/gpu/nova-core/fb.rs | 1 -
drivers/gpu/nova-core/firmware/gsp.rs | 3 +-
drivers/gpu/nova-core/firmware/riscv.rs | 6 +-
drivers/gpu/nova-core/gsp.rs | 2 +
drivers/gpu/nova-core/gsp/boot.rs | 7 +++
drivers/gpu/nova-core/gsp/fw.rs | 55 ++++++++++++++++++-
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 2 +
7 files changed, 69 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/nova-core/fb.rs b/drivers/gpu/nova-core/fb.rs
index 4d6a1f452183..5580498ba2fb 100644
--- a/drivers/gpu/nova-core/fb.rs
+++ b/drivers/gpu/nova-core/fb.rs
@@ -87,7 +87,6 @@ pub(crate) fn unregister(&self, bar: &Bar0) {
///
/// Contains ranges of GPU memory reserved for a given purpose during the GSP
boot process.
#[derive(Debug)]
-#[expect(dead_code)]
pub(crate) struct FbLayout {
/// Range of the framebuffer. Starts at `0`.
pub(crate) fb: Range<u64>,
diff --git a/drivers/gpu/nova-core/firmware/gsp.rs b/drivers/gpu/nova-core/firmware/gsp.rs
index 3a1cf0607de7..c9ad912a3150 100644
--- a/drivers/gpu/nova-core/firmware/gsp.rs
+++ b/drivers/gpu/nova-core/firmware/gsp.rs
@@ -131,7 +131,7 @@ pub(crate) struct GspFirmware {
/// Size in bytes of the firmware contained in [`Self::fw`].
pub size: usize,
/// Device-mapped GSP signatures matching the GPU's [`Chipset`].
- signatures: DmaObject,
+ pub signatures: DmaObject,
/// GSP bootloader, verifies the GSP firmware before loading and running it.
pub bootloader: RiscvFirmware,
}
@@ -216,7 +216,6 @@ pub(crate) fn new<'a, 'b>(
}))
}
- #[expect(unused)]
/// Returns the DMA handle of the radix3 level 0 page table.
pub(crate) fn radix3_dma_handle(&self) -> DmaAddress {
self.level0.dma_handle()
diff --git a/drivers/gpu/nova-core/firmware/riscv.rs b/drivers/gpu/nova-core/firmware/riscv.rs
index 04f1283abb72..115b5f5355a1 100644
--- a/drivers/gpu/nova-core/firmware/riscv.rs
+++ b/drivers/gpu/nova-core/firmware/riscv.rs
@@ -55,11 +55,11 @@ fn new(bin_fw: &BinFirmware<'_>) -> Result<Self> {
#[expect(unused)]
pub(crate) struct RiscvFirmware {
/// Offset at which the code starts in the firmware image.
- code_offset: u32,
+ pub(crate) code_offset: u32,
/// Offset at which the data starts in the firmware image.
- data_offset: u32,
+ pub(crate) data_offset: u32,
/// Offset at which the manifest starts in the firmware image.
- manifest_offset: u32,
+ pub(crate) manifest_offset: u32,
/// Application version.
app_version: u32,
/// Device-mapped firmware image.
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index f1727173bd42..554eb1a34ee7 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -10,6 +10,8 @@
use kernel::prelude::*;
use kernel::transmute::AsBytes;
+use crate::fb::FbLayout;
+
pub(crate) use fw::{GspFwWprMeta, LibosParams};
mod fw;
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index fb22508128c4..1d2448331d7a 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
use kernel::device;
+use kernel::dma::CoherentAllocation;
+use kernel::dma_write;
use kernel::pci;
use kernel::prelude::*;
@@ -14,6 +16,7 @@
FIRMWARE_VERSION,
};
use crate::gpu::Chipset;
+use crate::gsp::GspFwWprMeta;
use crate::regs;
use crate::vbios::Vbios;
@@ -132,6 +135,10 @@ pub(crate) fn boot(
bar,
)?;
+ let wpr_meta =
+ CoherentAllocation::<GspFwWprMeta>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?;
+ dma_write!(wpr_meta[0] = GspFwWprMeta::new(&gsp_fw, &fb_layout))?;
+
Ok(())
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index c3bececc29cd..1cc992ca492c 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -10,10 +10,12 @@
use kernel::dma::CoherentAllocation;
use kernel::prelude::*;
use kernel::ptr::{Alignable, Alignment};
-use kernel::sizes::SZ_1M;
+use kernel::sizes::{SZ_128K, SZ_1M};
use kernel::transmute::{AsBytes, FromBytes};
+use crate::firmware::gsp::GspFirmware;
use crate::gpu::Chipset;
+use crate::gsp::FbLayout;
/// Dummy type to group methods related to heap parameters for running the GSP
firmware.
pub(crate) struct GspFwHeapParams(());
@@ -105,6 +107,57 @@ pub(crate) fn wpr_heap_size(&self, chipset: Chipset, fb_size: u64) -> u64 {
#[repr(transparent)]
pub(crate) struct GspFwWprMeta(bindings::GspFwWprMeta);
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for GspFwWprMeta {}
+
+// SAFETY: This struct only contains integer types for which all bit patterns
+// are valid.
+unsafe impl FromBytes for GspFwWprMeta {}
+
+type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1;
+type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;
+
+impl GspFwWprMeta {
+ pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
+ Self(bindings::GspFwWprMeta {
+ magic: r570_144::GSP_FW_WPR_META_MAGIC as u64,
+ revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION),
+ sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(),
+ sizeOfRadix3Elf: gsp_firmware.size as u64,
+ sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(),
+ sizeOfBootloader: gsp_firmware.bootloader.ucode.size() as u64,
+ bootloaderCodeOffset: u64::from(gsp_firmware.bootloader.code_offset),
+ bootloaderDataOffset: u64::from(gsp_firmware.bootloader.data_offset),
+ bootloaderManifestOffset: u64::from(gsp_firmware.bootloader.manifest_offset),
+ __bindgen_anon_1: GspFwWprMetaBootResumeInfo {
+ __bindgen_anon_1: GspFwWprMetaBootInfo {
+ sysmemAddrOfSignature: gsp_firmware.signatures.dma_handle(),
+ sizeOfSignature: gsp_firmware.signatures.size() as u64,
+ },
+ },
+ gspFwRsvdStart: fb_layout.heap.start,
+ nonWprHeapOffset: fb_layout.heap.start,
+ nonWprHeapSize: fb_layout.heap.end - fb_layout.heap.start,
+ gspFwWprStart: fb_layout.wpr2.start,
+ gspFwHeapOffset: fb_layout.wpr2_heap.start,
+ gspFwHeapSize: fb_layout.wpr2_heap.end - fb_layout.wpr2_heap.start,
+ gspFwOffset: fb_layout.elf.start,
+ bootBinOffset: fb_layout.boot.start,
+ frtsOffset: fb_layout.frts.start,
+ frtsSize: fb_layout.frts.end - fb_layout.frts.start,
+ gspFwWprEnd: fb_layout
+ .vga_workspace
+ .start
+ .align_down(Alignment::new::<SZ_128K>()),
+ gspFwHeapVfPartitionCount: fb_layout.vf_partition_count,
+ fbSize: fb_layout.fb.end - fb_layout.fb.start,
+ vgaWorkspaceOffset: fb_layout.vga_workspace.start,
+ vgaWorkspaceSize: fb_layout.vga_workspace.end - fb_layout.vga_workspace.start,
+ ..Default::default()
+ })
+ }
+}
+
/// Struct containing the arguments required to pass a memory buffer to the GSP
/// for use during initialisation.
///
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 6a14cc324391..392b25dc6991 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -9,6 +9,8 @@
pub const GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS2_MAX_MB: u32 = 256;
pub const GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MIN_MB: u32 = 88;
pub const GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MAX_MB: u32 = 280;
+pub const GSP_FW_WPR_META_REVISION: u32 = 1;
+pub const GSP_FW_WPR_META_MAGIC: i64 = -2577556379034558285;
pub type __u8 = ffi::c_uchar;
pub type __u16 = ffi::c_ushort;
pub type __u32 = ffi::c_uint;
--
2.50.1
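One detail worth spelling out from GspFwWprMeta::new() above: gspFwWprEnd is the VGA workspace start rounded down to a 128 KiB boundary. Here is a minimal sketch of that align-down step, assuming a power-of-two alignment (this is the underlying bit trick, not the kernel's Alignment API, and the addresses are made up):

    fn align_down(value: u64, align: u64) -> u64 {
        debug_assert!(align.is_power_of_two());
        value & !(align - 1)
    }

    fn main() {
        const SZ_128K: u64 = 128 * 1024;
        // A hypothetical, unaligned VGA workspace start:
        assert_eq!(align_down(0x1234_5678, SZ_128K), 0x1234_0000);
        // Already-aligned addresses are left unchanged:
        assert_eq!(align_down(0x1236_0000, SZ_128K), 0x1236_0000);
    }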
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 04/14] gpu: nova-core: Add a slice-buffer (sbuffer) datastructure
From: Joel Fernandes <joelagnelf at nvidia.com>
A data structure that can be used to write across multiple slices which
may be out of order in memory. This lets SBuffer users correctly and
safely write out of memory order, without error-prone tracking of
pointers/offsets.
let mut buf1 = [0u8; 3];
let mut buf2 = [0u8; 5];
let mut sbuffer = SBufferIter::new_writer([&mut buf1[..], &mut buf2[..]]);
let data = b"hello";
let result = sbuffer.write_all(data);
An internal conversion of gsp.rs to use this resulted in a nice -ve delta:
gsp.rs: 37 insertions(+), 99 deletions(-)
Co-developed-by: Alistair Popple <apopple at nvidia.com>
Signed-off-by: Alistair Popple <apopple at nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf at nvidia.com>
Reviewed-by: Lyude Paul <lyude at redhat.com>
---
Changes for v5:
- Typos
- s/ETOOSMALL/EINVAL/
- Add documentation
- Fix up examples
Changes for v3:
- Addressed minor review comment from Lyude
---
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/sbuffer.rs | 218 +++++++++++++++++++++++++++++
2 files changed, 219 insertions(+)
create mode 100644 drivers/gpu/nova-core/sbuffer.rs
diff --git a/drivers/gpu/nova-core/nova_core.rs b/drivers/gpu/nova-core/nova_core.rs
index fffcaee2249f..a6feeba6254c 100644
--- a/drivers/gpu/nova-core/nova_core.rs
+++ b/drivers/gpu/nova-core/nova_core.rs
@@ -11,6 +11,7 @@
mod gpu;
mod gsp;
mod regs;
+mod sbuffer;
mod util;
mod vbios;
diff --git a/drivers/gpu/nova-core/sbuffer.rs b/drivers/gpu/nova-core/sbuffer.rs
new file mode 100644
index 000000000000..d9c412a68bd8
--- /dev/null
+++ b/drivers/gpu/nova-core/sbuffer.rs
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use core::ops::Deref;
+
+use kernel::alloc::KVec;
+use kernel::error::code::*;
+use kernel::prelude::*;
+
+/// A buffer abstraction for discontiguous byte slices.
+///
+/// This allows you to treat multiple non-contiguous `&mut [u8]` slices
+/// of the same length as a single stream-like read/write buffer.
+///
+/// # Example:
+///
+/// ```
+/// let mut buf1 = [0u8; 5];
+/// let mut buf2 = [0u8; 5];
+/// let mut sbuffer = SBufferIter::new_writer([&mut buf1[..], &mut buf2[..]]);
+///
+/// let data = b"hello";
+/// let result = sbuffer.write_all(data);
+/// ```
+///
+/// A sliding window of slices to process.
+///
+/// Both read and write buffers are implemented in terms of operating on slices of a requested
+/// size. This base class implements logic that can be shared between the two to support that.
+///
+/// `S` is a slice type, `I` is an iterator yielding `S`.
+pub(crate) struct SBufferIter<I: Iterator> {
+ /// `Some` if we are not at the end of the data yet.
+ cur_slice: Option<I::Item>,
+ /// All the slices remaining after `cur_slice`.
+ slices: I,
+}
+
+impl<'a, I> SBufferIter<I>
+where
+ I: Iterator,
+{
+ /// Creates a reader buffer for a discontiguous set of byte slices.
+ ///
+ /// # Example:
+ ///
+ /// ```
+ /// let buf1: [u8; 5] = [0, 1, 2, 3, 4];
+ /// let buf2: [u8; 5] = [5, 6, 7, 8, 9];
+ /// let sbuffer = SBufferIter::new_reader([&buf1[..], &buf2[..]]);
+ /// let sum: u8 = sbuffer.sum();
+ /// assert_eq!(sum, 45);
+ /// ```
+ #[expect(unused)]
+ pub(crate) fn new_reader(slices: impl IntoIterator<IntoIter = I>) -> Self
+ where
+ I: Iterator<Item = &'a [u8]>,
+ {
+ Self::new(slices)
+ }
+
+ /// Creates a writeable buffer for a discontiguous set of byte slices.
+ ///
+ /// # Example:
+ ///
+ /// ```
+ /// let mut buf1 = [0u8; 5];
+ /// let mut buf2 = [0u8; 5];
+ /// let mut sbuffer = SBufferIter::new_writer([&mut buf1[..], &mut buf2[..]]);
+ /// sbuffer.write_all(&[0u8, 1, 2, 3, 4, 5, 6, 7, 8, 9][..])?;
+ /// drop(sbuffer);
+ /// assert_eq!(buf1, [0, 1, 2, 3, 4]);
+ /// assert_eq!(buf2, [5, 6, 7, 8, 9]);
+ ///
+ /// ```
+ #[expect(unused)]
+ pub(crate) fn new_writer(slices: impl IntoIterator<IntoIter = I>) -> Self
+ where
+ I: Iterator<Item = &'a mut [u8]>,
+ {
+ Self::new(slices)
+ }
+
+ fn new(slices: impl IntoIterator<IntoIter = I>) -> Self
+ where
+ I::Item: Deref<Target = [u8]>,
+ {
+ let mut slices = slices.into_iter();
+
+ Self {
+ // Skip empty slices to avoid trouble down the road.
+ cur_slice: slices.find(|s| !s.deref().is_empty()),
+ slices,
+ }
+ }
+
+ fn get_slice_internal(
+ &mut self,
+ len: usize,
+ mut f: impl FnMut(I::Item, usize) -> (I::Item, I::Item),
+ ) -> Option<I::Item>
+ where
+ I::Item: Deref<Target = [u8]>,
+ {
+ match self.cur_slice.take() {
+ None => None,
+ Some(cur_slice) => {
+ if len >= cur_slice.len() {
+ // Caller requested more data than is in the current slice, return it entirely
+ // and prepare the following slice for being used. Skip empty slices to avoid
+ // trouble.
+ self.cur_slice = self.slices.find(|s| !s.is_empty());
+
+ Some(cur_slice)
+ } else {
+ // The current slice can satisfy the request, split it and return a slice of
+ // the requested size.
+ let (ret, next) = f(cur_slice, len);
+ self.cur_slice = Some(next);
+
+ Some(ret)
+ }
+ }
+ }
+ }
+}
+
+/// Provides a way to get non-mutable slices of data to read from.
+impl<'a, I> SBufferIter<I>
+where
+ I: Iterator<Item = &'a [u8]>,
+{
+ /// Returns a slice of at most `len` bytes, or `None` if we are at the end of the data.
+ ///
+ /// If a slice shorter than `len` bytes has been returned, the caller can call this method
+ /// again until it returns `None` to try and obtain the remainder of the data.
+ fn get_slice(&mut self, len: usize) -> Option<&'a [u8]> {
+ self.get_slice_internal(len, |s, pos| s.split_at(pos))
+ }
+
+ /// Ideally we would implement `Read`, but it is not available in `core`.
+ /// So mimic `std::io::Read::read_exact`.
+ #[expect(unused)]
+ pub(crate) fn read_exact(&mut self, mut dst: &mut [u8]) -> Result {
+ while !dst.is_empty() {
+ match self.get_slice(dst.len()) {
+ None => return Err(EINVAL),
+ Some(src) => {
+ let dst_slice;
+ (dst_slice, dst) = dst.split_at_mut(src.len());
+ dst_slice.copy_from_slice(src);
+ }
+ }
+ }
+
+ Ok(())
+ }
+
+ /// Read all the remaining data into a `KVec`.
+ ///
+ /// `self` will be empty after this operation.
+ #[expect(unused)]
+ pub(crate) fn read_into_kvec(&mut self, flags: kernel::alloc::Flags) -> Result<KVec<u8>> {
+ let mut buf = KVec::<u8>::new();
+
+ if let Some(slice) = core::mem::take(&mut self.cur_slice) {
+ buf.extend_from_slice(slice, flags)?;
+ }
+ for slice in &mut self.slices {
+ buf.extend_from_slice(slice, flags)?;
+ }
+
+ Ok(buf)
+ }
+}
+
+/// Provides a way to get mutable slices of data to write into.
+impl<'a, I> SBufferIter<I>
+where
+ I: Iterator<Item = &'a mut [u8]>,
+{
+ /// Returns a mutable slice of at most `len` bytes, or `None` if we are at the end of the data.
+ ///
+ /// If a slice shorter than `len` bytes has been returned, the caller can call this method
+ /// again until it returns `None` to try and obtain the remainder of the data.
+ fn get_slice_mut(&mut self, len: usize) -> Option<&'a mut [u8]> {
+ self.get_slice_internal(len, |s, pos| s.split_at_mut(pos))
+ }
+
+ /// Ideally we would implement `Write`, but it is not available in `core`.
+ /// So mimic `std::io::Write::write_all`.
+ #[expect(unused)]
+ pub(crate) fn write_all(&mut self, mut src: &[u8]) -> Result {
+ while !src.is_empty() {
+ match self.get_slice_mut(src.len()) {
+ None => return Err(ETOOSMALL),
+ Some(dst) => {
+ let src_slice;
+ (src_slice, src) = src.split_at(dst.len());
+ dst.copy_from_slice(src_slice);
+ }
+ }
+ }
+
+ Ok(())
+ }
+}
+
+impl<'a, I> Iterator for SBufferIter<I>
+where
+ I: Iterator<Item = &'a [u8]>,
+{
+ type Item = u8;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ // Returned slices are guaranteed to not be empty so we can safely index the first entry.
+ self.get_slice(1).map(|s| s[0])
+ }
+}
--
2.50.1
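To round out the commit message example above, the reader side added in this patch is used the same way: read_exact() pulls bytes across slice boundaries transparently, and the Iterator impl drains whatever is left. Illustrative snippet only, assuming the SBufferIter API from this patch:

    let buf1: [u8; 3] = [1, 2, 3];
    let buf2: [u8; 5] = [4, 5, 6, 7, 8];
    let mut reader = SBufferIter::new_reader([&buf1[..], &buf2[..]]);

    let mut out = [0u8; 6];
    reader.read_exact(&mut out).unwrap(); // crosses from buf1 into buf2
    assert_eq!(out, [1, 2, 3, 4, 5, 6]);

    // The remaining two bytes can still be consumed via the Iterator impl.
    assert_eq!(reader.next(), Some(7));
    assert_eq!(reader.next(), Some(8));
    assert_eq!(reader.next(), None);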
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 05/14] gpu: nova-core: Add zeroable trait to bindings
Derive the Zeroable trait for existing bindgen generated bindings. This
is safe because all bindgen generated types are simple integer types for
which any bit pattern, including all zeros, is valid.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- New for v5
---
drivers/gpu/nova-core/gsp/fw/r570_144.rs | 1 +
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 16 ++++++++--------
2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144.rs b/drivers/gpu/nova-core/gsp/fw/r570_144.rs
index 82a973cd99c3..4f5c65ac1eb9 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144.rs
@@ -25,4 +25,5 @@
unsafe_op_in_unsafe_fn
)]
use kernel::ffi;
+use kernel::prelude::Zeroable;
include!("r570_144/bindings.rs");
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 392b25dc6991..f7b38978c5f8 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -20,7 +20,7 @@
pub type u32_ = __u32;
pub type u64_ = __u64;
#[repr(C)]
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Zeroable)]
pub struct GspFwWprMeta {
pub magic: u64_,
pub revision: u64_,
@@ -55,19 +55,19 @@ pub struct GspFwWprMeta {
pub verified: u64_,
}
#[repr(C)]
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Zeroable)]
pub union GspFwWprMeta__bindgen_ty_1 {
pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1,
pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 {
pub sysmemAddrOfSignature: u64_,
pub sizeOfSignature: u64_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 {
pub gspFwHeapFreeListWprOffset: u32_,
pub unused0: u32_,
@@ -83,13 +83,13 @@ fn default() -> Self {
}
}
#[repr(C)]
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Zeroable)]
pub union GspFwWprMeta__bindgen_ty_2 {
pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1,
pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 {
pub partitionRpcAddr: u64_,
pub partitionRpcRequestOffset: u16_,
@@ -101,7 +101,7 @@ pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 {
pub lsUcodeVersion: u32_,
}
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 {
pub partitionRpcPadding: [u32_; 4usize],
pub sysmemAddrOfCrashReportQueue: u64_,
@@ -136,7 +136,7 @@ fn default() -> Self {
pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2;
pub type LibosMemoryRegionLoc = ffi::c_uint;
#[repr(C)]
-#[derive(Debug, Default, Copy, Clone)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct LibosMemoryRegionInitArgument {
pub id8: LibosAddress,
pub pa: LibosAddress,
--
2.50.1
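For context on what the derive buys us: once a bindgen struct is Zeroable, the in-place initialisers used later in the series can name only the fields they care about and zero-fill the rest. A minimal sketch in the style of patch 6's rpc_message_header_v initialiser; the `Example` struct here is made up, and the macros are the kernel's pin-init ones:

    #[derive(Zeroable)]
    #[repr(C)]
    struct Example {
        version: u32,
        flags: u32,
        reserved: [u32; 4],
    }

    fn example_init() -> impl Init<Example, Error> {
        try_init!(Example {
            version: 1,
            flags: 0x2,
            // Remaining fields (here `reserved`) are filled with all-zero
            // bit patterns, which the derive guarantees to be valid.
            ..Zeroable::init_zeroed()
        })
    }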
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 06/14] gpu: nova-core: Add GSP command queue bindings
Add bindings and accessors used for the GSP command queue.
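Illustrative usage (not part of the patch itself): queue code reads a raw function number from a message header, converts it with the TryFrom impl added below, and relies on the Display impl for log output.

    // `raw` would normally come from a received rpc_message_header_v.
    let raw: u32 = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY;
    let function = MsgFunction::try_from(raw)?;    // Err(EINVAL) for unknown codes
    assert!(function == MsgFunction::SetRegistry); // PartialEq is derived below
    // The Display impl renders the short name used in log messages:
    // format!("{}", function) == "SET_REGISTRY"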
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Derive the Zeroable trait for structs and enums
Changes for v4:
- Don't panic the kernel if trying to initialise a large (> 4GB)
message header.
- Use `init!` to provide safe and complete initialisers.
- Take an enum type instead of a u32 for the function.
Changes for v3:
- New for v3
---
drivers/gpu/nova-core/gsp/fw.rs | 275 +++++++++
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 541 ++++++++++++++++++
2 files changed, 816 insertions(+)
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 1cc992ca492c..a2ce570ddfaf 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -5,6 +5,7 @@
// Alias to avoid repeating the version number with every use.
use r570_144 as bindings;
+use core::fmt;
use core::ops::Range;
use kernel::dma::CoherentAllocation;
@@ -16,6 +17,7 @@
use crate::firmware::gsp::GspFirmware;
use crate::gpu::Chipset;
use crate::gsp::FbLayout;
+use crate::gsp::GSP_PAGE_SIZE;
/// Dummy type to group methods related to heap parameters for running the GSP
firmware.
pub(crate) struct GspFwHeapParams(());
@@ -158,6 +160,120 @@ pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
}
}
+#[derive(PartialEq)]
+pub(crate) enum MsgFunction {
+ // Common function codes
+ Nop = bindings::NV_VGPU_MSG_FUNCTION_NOP as isize,
+ SetGuestSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO as isize,
+ AllocRoot = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT as isize,
+ AllocDevice = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE as isize,
+ AllocMemory = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY as isize,
+ AllocCtxDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA as isize,
+ AllocChannelDma = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA as isize,
+ MapMemory = bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY as isize,
+ BindCtxDma = bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA as isize,
+ AllocObject = bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT as isize,
+ Free = bindings::NV_VGPU_MSG_FUNCTION_FREE as isize,
+ Log = bindings::NV_VGPU_MSG_FUNCTION_LOG as isize,
+ GetGspStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO as isize,
+ SetRegistry = bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY as isize,
+ GspSetSystemInfo = bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO as isize,
+ GspInitPostObjGpu = bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU as isize,
+ GspRmControl = bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL as isize,
+ GetStaticInfo = bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO as isize,
+
+ // Event codes
+ GspInitDone = bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE as isize,
+ GspRunCpuSequencer = bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER as isize,
+ PostEvent = bindings::NV_VGPU_MSG_EVENT_POST_EVENT as isize,
+ RcTriggered = bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED as isize,
+ MmuFaultQueued = bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED as isize,
+ OsErrorLog = bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG as isize,
+ GspPostNoCat = bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD as isize,
+ GspLockdownNotice = bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE as isize,
+ UcodeLibOsPrint = bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT as isize,
+}
+
+impl fmt::Display for MsgFunction {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match self {
+ // Common function codes
+ MsgFunction::Nop => write!(f, "NOP"),
+ MsgFunction::SetGuestSystemInfo => write!(f, "SET_GUEST_SYSTEM_INFO"),
+ MsgFunction::AllocRoot => write!(f, "ALLOC_ROOT"),
+ MsgFunction::AllocDevice => write!(f, "ALLOC_DEVICE"),
+ MsgFunction::AllocMemory => write!(f, "ALLOC_MEMORY"),
+ MsgFunction::AllocCtxDma => write!(f, "ALLOC_CTX_DMA"),
+ MsgFunction::AllocChannelDma => write!(f, "ALLOC_CHANNEL_DMA"),
+ MsgFunction::MapMemory => write!(f, "MAP_MEMORY"),
+ MsgFunction::BindCtxDma => write!(f, "BIND_CTX_DMA"),
+ MsgFunction::AllocObject => write!(f, "ALLOC_OBJECT"),
+ MsgFunction::Free => write!(f, "FREE"),
+ MsgFunction::Log => write!(f, "LOG"),
+ MsgFunction::GetGspStaticInfo => write!(f, "GET_GSP_STATIC_INFO"),
+ MsgFunction::SetRegistry => write!(f, "SET_REGISTRY"),
+ MsgFunction::GspSetSystemInfo => write!(f, "GSP_SET_SYSTEM_INFO"),
+ MsgFunction::GspInitPostObjGpu => write!(f, "GSP_INIT_POST_OBJGPU"),
+ MsgFunction::GspRmControl => write!(f, "GSP_RM_CONTROL"),
+ MsgFunction::GetStaticInfo => write!(f, "GET_STATIC_INFO"),
+
+ // Event codes
+ MsgFunction::GspInitDone => write!(f, "INIT_DONE"),
+ MsgFunction::GspRunCpuSequencer => write!(f, "RUN_CPU_SEQUENCER"),
+ MsgFunction::PostEvent => write!(f, "POST_EVENT"),
+ MsgFunction::RcTriggered => write!(f, "RC_TRIGGERED"),
+ MsgFunction::MmuFaultQueued => write!(f, "MMU_FAULT_QUEUED"),
+ MsgFunction::OsErrorLog => write!(f, "OS_ERROR_LOG"),
+ MsgFunction::GspPostNoCat => write!(f, "NOCAT"),
+ MsgFunction::GspLockdownNotice => write!(f, "LOCKDOWN_NOTICE"),
+ MsgFunction::UcodeLibOsPrint => write!(f, "LIBOS_PRINT"),
+ }
+ }
+}
+
+impl TryFrom<u32> for MsgFunction {
+ type Error = kernel::error::Error;
+
+ fn try_from(value: u32) -> Result<MsgFunction> {
+ match value {
+ bindings::NV_VGPU_MSG_FUNCTION_NOP => Ok(MsgFunction::Nop),
+ bindings::NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO => {
+ Ok(MsgFunction::SetGuestSystemInfo)
+ }
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_ROOT => Ok(MsgFunction::AllocRoot),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE => Ok(MsgFunction::AllocDevice),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY => Ok(MsgFunction::AllocMemory),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA => Ok(MsgFunction::AllocCtxDma),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA => Ok(MsgFunction::AllocChannelDma),
+ bindings::NV_VGPU_MSG_FUNCTION_MAP_MEMORY => Ok(MsgFunction::MapMemory),
+ bindings::NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA => Ok(MsgFunction::BindCtxDma),
+ bindings::NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT => Ok(MsgFunction::AllocObject),
+ bindings::NV_VGPU_MSG_FUNCTION_FREE => Ok(MsgFunction::Free),
+ bindings::NV_VGPU_MSG_FUNCTION_LOG => Ok(MsgFunction::Log),
+ bindings::NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO => Ok(MsgFunction::GetGspStaticInfo),
+ bindings::NV_VGPU_MSG_FUNCTION_SET_REGISTRY => Ok(MsgFunction::SetRegistry),
+ bindings::NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO => Ok(MsgFunction::GspSetSystemInfo),
+ bindings::NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU => {
+ Ok(MsgFunction::GspInitPostObjGpu)
+ }
+ bindings::NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL => Ok(MsgFunction::GspRmControl),
+ bindings::NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO => Ok(MsgFunction::GetStaticInfo),
+ bindings::NV_VGPU_MSG_EVENT_GSP_INIT_DONE => Ok(MsgFunction::GspInitDone),
+ bindings::NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER => {
+ Ok(MsgFunction::GspRunCpuSequencer)
+ }
+ bindings::NV_VGPU_MSG_EVENT_POST_EVENT => Ok(MsgFunction::PostEvent),
+ bindings::NV_VGPU_MSG_EVENT_RC_TRIGGERED => Ok(MsgFunction::RcTriggered),
+ bindings::NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED => Ok(MsgFunction::MmuFaultQueued),
+ bindings::NV_VGPU_MSG_EVENT_OS_ERROR_LOG => Ok(MsgFunction::OsErrorLog),
+ bindings::NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD => Ok(MsgFunction::GspPostNoCat),
+ bindings::NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE => Ok(MsgFunction::GspLockdownNotice),
+ bindings::NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT => Ok(MsgFunction::UcodeLibOsPrint),
+ _ => Err(EINVAL),
+ }
+ }
+}
+
/// Struct containing the arguments required to pass a memory buffer to the GSP
/// for use during initialisation.
///
@@ -208,3 +324,162 @@ fn id8(name: &str) -> u64 {
}))
}
}
+
+#[repr(transparent)]
+pub(crate) struct MsgqTxHeader(bindings::msgqTxHeader);
+
+impl MsgqTxHeader {
+ pub(crate) fn new(msgq_size: u32, rx_hdr_offset: u32, msg_count: u32) -> Self {
+ Self(bindings::msgqTxHeader {
+ version: 0,
+ size: msgq_size,
+ msgSize: GSP_PAGE_SIZE as u32,
+ msgCount: msg_count,
+ writePtr: 0,
+ flags: 1,
+ rxHdrOff: rx_hdr_offset,
+ entryOff: GSP_PAGE_SIZE as u32,
+ })
+ }
+
+ pub(crate) fn write_ptr(&self) -> u32 {
+ let ptr = (&self.0.writePtr) as *const u32;
+
+ // SAFETY: This is part of a CoherentAllocation and implements the
+ // equivalent as what the dma_read! macro would and is therefore safe
+ // for the same reasons.
+ unsafe { ptr.read_volatile() }
+ }
+
+ pub(crate) fn set_write_ptr(&mut self, val: u32) {
+ let ptr = (&mut self.0.writePtr) as *mut u32;
+
+ // SAFETY: This is part of a CoherentAllocation and implements the
+ // equivalent as what the dma_write! macro would and is therefore safe
+ // for the same reasons.
+ unsafe { ptr.write_volatile(val) }
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for MsgqTxHeader {}
+
+/// RX header for setting up a message queue with the GSP.
+#[repr(transparent)]
+pub(crate) struct MsgqRxHeader(bindings::msgqRxHeader);
+
+impl MsgqRxHeader {
+ pub(crate) fn new() -> Self {
+ Self(Default::default())
+ }
+
+ pub(crate) fn read_ptr(&self) -> u32 {
+ let ptr = (&self.0.readPtr) as *const u32;
+
+ // SAFETY: This is part of a CoherentAllocation and implements the
+ // equivalent as what the dma_read! macro would and is therefore safe
+ // for the same reasons.
+ unsafe { ptr.read_volatile() }
+ }
+
+ pub(crate) fn set_read_ptr(&mut self, val: u32) {
+ let ptr = (&mut self.0.readPtr) as *mut u32;
+
+ // SAFETY: This is part of a CoherentAllocation and implements the
+ // equivalent as what the dma_write! macro would and is therefore safe
+ // for the same reasons.
+ unsafe { ptr.write_volatile(val) }
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for MsgqRxHeader {}
+
+impl bindings::rpc_message_header_v {
+ pub(crate) fn init(cmd_size: u32, function: MsgFunction) -> impl Init<Self, Error> {
+ type RpcMessageHeader = bindings::rpc_message_header_v;
+ try_init!(RpcMessageHeader {
+ // TODO: magic number
+ header_version: 0x03000000,
+ signature: bindings::NV_VGPU_MSG_SIGNATURE_VALID,
+ function: function as u32,
+ length: (size_of::<Self>() as u32)
+ .checked_add(cmd_size)
+ .ok_or(EOVERFLOW)?,
+ rpc_result: 0xffffffff,
+ rpc_result_private: 0xffffffff,
+ ..Zeroable::init_zeroed()
+ })
+ }
+}
+
+// SAFETY: We can't derive the Zeroable trait for this binding because the
+// procedural macro doesn't support the syntax used by bindgen to create the
+// __IncompleteArrayField types. So instead we implement it here, which is safe
+// because these are explicitly padded structures only containing types for
+// which any bit pattern, including all zeros, is valid.
+unsafe impl Zeroable for bindings::rpc_message_header_v {}
+
+#[repr(transparent)]
+pub(crate) struct GspMsgElement {
+ inner: bindings::GSP_MSG_QUEUE_ELEMENT,
+}
+
+impl GspMsgElement {
+ #[allow(non_snake_case)]
+ pub(crate) fn init(
+ sequence: u32,
+ cmd_size: usize,
+ function: MsgFunction,
+ ) -> impl Init<Self, Error> {
+ type RpcMessageHeader = bindings::rpc_message_header_v;
+ type InnerGspMsgElement = bindings::GSP_MSG_QUEUE_ELEMENT;
+ let init_inner = try_init!(InnerGspMsgElement {
+ seqNum: sequence,
+ elemCount: size_of::<Self>()
+ .checked_add(cmd_size)
+ .ok_or(EOVERFLOW)?
+ .div_ceil(GSP_PAGE_SIZE) as u32,
+ rpc <- RpcMessageHeader::init(cmd_size as u32, function),
+ ..Zeroable::init_zeroed()
+ });
+
+ try_init!(GspMsgElement {
+ inner <- init_inner,
+ })
+ }
+
+ pub(crate) fn set_checksum(&mut self, checksum: u32) {
+ self.inner.checkSum = checksum;
+ }
+
+ // Return the total length of the message, noting that rpc.length includes
+ // the length of the GspRpcHeader but not the message header.
+ pub(crate) fn length(&self) -> u32 {
+ size_of::<Self>() as u32 - size_of::<bindings::rpc_message_header_v>() as u32
+ + self.inner.rpc.length
+ }
+
+ pub(crate) fn sequence(&self) -> u32 {
+ self.inner.rpc.sequence
+ }
+
+ pub(crate) fn function_number(&self) -> u32 {
+ self.inner.rpc.function
+ }
+
+ pub(crate) fn function(&self) -> Result<MsgFunction> {
+ self.inner.rpc.function.try_into()
+ }
+
+ pub(crate) fn element_count(&self) -> u32 {
+ self.inner.elemCount
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for GspMsgElement {}
+
+// SAFETY: This struct only contains integer types for which all bit patterns
+// are valid.
+unsafe impl FromBytes for GspMsgElement {}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index f7b38978c5f8..1251b0c313ce 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -1,5 +1,36 @@
// SPDX-License-Identifier: GPL-2.0
+#[repr(C)]
+#[derive(Default)]
+pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]);
+impl<T> __IncompleteArrayField<T> {
+ #[inline]
+ pub const fn new() -> Self {
+ __IncompleteArrayField(::core::marker::PhantomData, [])
+ }
+ #[inline]
+ pub fn as_ptr(&self) -> *const T {
+ self as *const _ as *const T
+ }
+ #[inline]
+ pub fn as_mut_ptr(&mut self) -> *mut T {
+ self as *mut _ as *mut T
+ }
+ #[inline]
+ pub unsafe fn as_slice(&self, len: usize) -> &[T] {
+ ::core::slice::from_raw_parts(self.as_ptr(), len)
+ }
+ #[inline]
+ pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] {
+ ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len)
+ }
+}
+impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> {
+ fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
+ fmt.write_str("__IncompleteArrayField")
+ }
+}
+pub const NV_VGPU_MSG_SIGNATURE_VALID: u32 = 1129337430;
pub const GSP_FW_HEAP_PARAM_OS_SIZE_LIBOS2: u32 = 0;
pub const GSP_FW_HEAP_PARAM_OS_SIZE_LIBOS3_BAREMETAL: u32 = 23068672;
pub const GSP_FW_HEAP_PARAM_BASE_RM_SIZE_TU10X: u32 = 8388608;
@@ -11,6 +42,7 @@
pub const GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MAX_MB: u32 = 280;
pub const GSP_FW_WPR_META_REVISION: u32 = 1;
pub const GSP_FW_WPR_META_MAGIC: i64 = -2577556379034558285;
+pub const REGISTRY_TABLE_ENTRY_TYPE_DWORD: u32 = 1;
pub type __u8 = ffi::c_uchar;
pub type __u16 = ffi::c_ushort;
pub type __u32 = ffi::c_uint;
@@ -19,6 +51,477 @@
pub type u16_ = __u16;
pub type u32_ = __u32;
pub type u64_ = __u64;
+pub const NV_VGPU_MSG_FUNCTION_NOP: _bindgen_ty_2 = 0;
+pub const NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO: _bindgen_ty_2 = 1;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_ROOT: _bindgen_ty_2 = 2;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_DEVICE: _bindgen_ty_2 = 3;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_MEMORY: _bindgen_ty_2 = 4;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_CTX_DMA: _bindgen_ty_2 = 5;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_CHANNEL_DMA: _bindgen_ty_2 = 6;
+pub const NV_VGPU_MSG_FUNCTION_MAP_MEMORY: _bindgen_ty_2 = 7;
+pub const NV_VGPU_MSG_FUNCTION_BIND_CTX_DMA: _bindgen_ty_2 = 8;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_OBJECT: _bindgen_ty_2 = 9;
+pub const NV_VGPU_MSG_FUNCTION_FREE: _bindgen_ty_2 = 10;
+pub const NV_VGPU_MSG_FUNCTION_LOG: _bindgen_ty_2 = 11;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_VIDMEM: _bindgen_ty_2 = 12;
+pub const NV_VGPU_MSG_FUNCTION_UNMAP_MEMORY: _bindgen_ty_2 = 13;
+pub const NV_VGPU_MSG_FUNCTION_MAP_MEMORY_DMA: _bindgen_ty_2 = 14;
+pub const NV_VGPU_MSG_FUNCTION_UNMAP_MEMORY_DMA: _bindgen_ty_2 = 15;
+pub const NV_VGPU_MSG_FUNCTION_GET_EDID: _bindgen_ty_2 = 16;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_DISP_CHANNEL: _bindgen_ty_2 = 17;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_DISP_OBJECT: _bindgen_ty_2 = 18;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_SUBDEVICE: _bindgen_ty_2 = 19;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_DYNAMIC_MEMORY: _bindgen_ty_2 = 20;
+pub const NV_VGPU_MSG_FUNCTION_DUP_OBJECT: _bindgen_ty_2 = 21;
+pub const NV_VGPU_MSG_FUNCTION_IDLE_CHANNELS: _bindgen_ty_2 = 22;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_EVENT: _bindgen_ty_2 = 23;
+pub const NV_VGPU_MSG_FUNCTION_SEND_EVENT: _bindgen_ty_2 = 24;
+pub const NV_VGPU_MSG_FUNCTION_REMAPPER_CONTROL: _bindgen_ty_2 = 25;
+pub const NV_VGPU_MSG_FUNCTION_DMA_CONTROL: _bindgen_ty_2 = 26;
+pub const NV_VGPU_MSG_FUNCTION_DMA_FILL_PTE_MEM: _bindgen_ty_2 = 27;
+pub const NV_VGPU_MSG_FUNCTION_MANAGE_HW_RESOURCE: _bindgen_ty_2 = 28;
+pub const NV_VGPU_MSG_FUNCTION_BIND_ARBITRARY_CTX_DMA: _bindgen_ty_2 = 29;
+pub const NV_VGPU_MSG_FUNCTION_CREATE_FB_SEGMENT: _bindgen_ty_2 = 30;
+pub const NV_VGPU_MSG_FUNCTION_DESTROY_FB_SEGMENT: _bindgen_ty_2 = 31;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_SHARE_DEVICE: _bindgen_ty_2 = 32;
+pub const NV_VGPU_MSG_FUNCTION_DEFERRED_API_CONTROL: _bindgen_ty_2 = 33;
+pub const NV_VGPU_MSG_FUNCTION_REMOVE_DEFERRED_API: _bindgen_ty_2 = 34;
+pub const NV_VGPU_MSG_FUNCTION_SIM_ESCAPE_READ: _bindgen_ty_2 = 35;
+pub const NV_VGPU_MSG_FUNCTION_SIM_ESCAPE_WRITE: _bindgen_ty_2 = 36;
+pub const NV_VGPU_MSG_FUNCTION_SIM_MANAGE_DISPLAY_CONTEXT_DMA: _bindgen_ty_2 = 37;
+pub const NV_VGPU_MSG_FUNCTION_FREE_VIDMEM_VIRT: _bindgen_ty_2 = 38;
+pub const NV_VGPU_MSG_FUNCTION_PERF_GET_PSTATE_INFO: _bindgen_ty_2 = 39;
+pub const NV_VGPU_MSG_FUNCTION_PERF_GET_PERFMON_SAMPLE: _bindgen_ty_2 = 40;
+pub const NV_VGPU_MSG_FUNCTION_PERF_GET_VIRTUAL_PSTATE_INFO: _bindgen_ty_2 = 41;
+pub const NV_VGPU_MSG_FUNCTION_PERF_GET_LEVEL_INFO: _bindgen_ty_2 = 42;
+pub const NV_VGPU_MSG_FUNCTION_MAP_SEMA_MEMORY: _bindgen_ty_2 = 43;
+pub const NV_VGPU_MSG_FUNCTION_UNMAP_SEMA_MEMORY: _bindgen_ty_2 = 44;
+pub const NV_VGPU_MSG_FUNCTION_SET_SURFACE_PROPERTIES: _bindgen_ty_2 = 45;
+pub const NV_VGPU_MSG_FUNCTION_CLEANUP_SURFACE: _bindgen_ty_2 = 46;
+pub const NV_VGPU_MSG_FUNCTION_UNLOADING_GUEST_DRIVER: _bindgen_ty_2 = 47;
+pub const NV_VGPU_MSG_FUNCTION_TDR_SET_TIMEOUT_STATE: _bindgen_ty_2 = 48;
+pub const NV_VGPU_MSG_FUNCTION_SWITCH_TO_VGA: _bindgen_ty_2 = 49;
+pub const NV_VGPU_MSG_FUNCTION_GPU_EXEC_REG_OPS: _bindgen_ty_2 = 50;
+pub const NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO: _bindgen_ty_2 = 51;
+pub const NV_VGPU_MSG_FUNCTION_ALLOC_VIRTMEM: _bindgen_ty_2 = 52;
+pub const NV_VGPU_MSG_FUNCTION_UPDATE_PDE_2: _bindgen_ty_2 = 53;
+pub const NV_VGPU_MSG_FUNCTION_SET_PAGE_DIRECTORY: _bindgen_ty_2 = 54;
+pub const NV_VGPU_MSG_FUNCTION_GET_STATIC_PSTATE_INFO: _bindgen_ty_2 = 55;
+pub const NV_VGPU_MSG_FUNCTION_TRANSLATE_GUEST_GPU_PTES: _bindgen_ty_2 = 56;
+pub const NV_VGPU_MSG_FUNCTION_RESERVED_57: _bindgen_ty_2 = 57;
+pub const NV_VGPU_MSG_FUNCTION_RESET_CURRENT_GR_CONTEXT: _bindgen_ty_2 = 58;
+pub const NV_VGPU_MSG_FUNCTION_SET_SEMA_MEM_VALIDATION_STATE: _bindgen_ty_2 = 59;
+pub const NV_VGPU_MSG_FUNCTION_GET_ENGINE_UTILIZATION: _bindgen_ty_2 = 60;
+pub const NV_VGPU_MSG_FUNCTION_UPDATE_GPU_PDES: _bindgen_ty_2 = 61;
+pub const NV_VGPU_MSG_FUNCTION_GET_ENCODER_CAPACITY: _bindgen_ty_2 = 62;
+pub const NV_VGPU_MSG_FUNCTION_VGPU_PF_REG_READ32: _bindgen_ty_2 = 63;
+pub const NV_VGPU_MSG_FUNCTION_SET_GUEST_SYSTEM_INFO_EXT: _bindgen_ty_2 = 64;
+pub const NV_VGPU_MSG_FUNCTION_GET_GSP_STATIC_INFO: _bindgen_ty_2 = 65;
+pub const NV_VGPU_MSG_FUNCTION_RMFS_INIT: _bindgen_ty_2 = 66;
+pub const NV_VGPU_MSG_FUNCTION_RMFS_CLOSE_QUEUE: _bindgen_ty_2 = 67;
+pub const NV_VGPU_MSG_FUNCTION_RMFS_CLEANUP: _bindgen_ty_2 = 68;
+pub const NV_VGPU_MSG_FUNCTION_RMFS_TEST: _bindgen_ty_2 = 69;
+pub const NV_VGPU_MSG_FUNCTION_UPDATE_BAR_PDE: _bindgen_ty_2 = 70;
+pub const NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD: _bindgen_ty_2 = 71;
+pub const NV_VGPU_MSG_FUNCTION_GSP_SET_SYSTEM_INFO: _bindgen_ty_2 = 72;
+pub const NV_VGPU_MSG_FUNCTION_SET_REGISTRY: _bindgen_ty_2 = 73;
+pub const NV_VGPU_MSG_FUNCTION_GSP_INIT_POST_OBJGPU: _bindgen_ty_2 = 74;
+pub const NV_VGPU_MSG_FUNCTION_SUBDEV_EVENT_SET_NOTIFICATION: _bindgen_ty_2 =
75;
+pub const NV_VGPU_MSG_FUNCTION_GSP_RM_CONTROL: _bindgen_ty_2 = 76;
+pub const NV_VGPU_MSG_FUNCTION_GET_STATIC_INFO2: _bindgen_ty_2 = 77;
+pub const NV_VGPU_MSG_FUNCTION_DUMP_PROTOBUF_COMPONENT: _bindgen_ty_2 = 78;
+pub const NV_VGPU_MSG_FUNCTION_UNSET_PAGE_DIRECTORY: _bindgen_ty_2 = 79;
+pub const NV_VGPU_MSG_FUNCTION_GET_CONSOLIDATED_STATIC_INFO: _bindgen_ty_2 =
80;
+pub const NV_VGPU_MSG_FUNCTION_GMMU_REGISTER_FAULT_BUFFER: _bindgen_ty_2 = 81;
+pub const NV_VGPU_MSG_FUNCTION_GMMU_UNREGISTER_FAULT_BUFFER: _bindgen_ty_2 =
82;
+pub const NV_VGPU_MSG_FUNCTION_GMMU_REGISTER_CLIENT_SHADOW_FAULT_BUFFER:
_bindgen_ty_2 = 83;
+pub const NV_VGPU_MSG_FUNCTION_GMMU_UNREGISTER_CLIENT_SHADOW_FAULT_BUFFER:
_bindgen_ty_2 = 84;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_VGPU_FB_USAGE: _bindgen_ty_2 = 85;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_NVFBC_SW_SESSION_UPDATE_INFO: _bindgen_ty_2
= 86;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_NVENC_SW_SESSION_UPDATE_INFO: _bindgen_ty_2
= 87;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESET_CHANNEL: _bindgen_ty_2 = 88;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESET_ISOLATED_CHANNEL: _bindgen_ty_2 = 89;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_HANDLE_VF_PRI_FAULT: _bindgen_ty_2 =
90;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CLK_GET_EXTENDED_INFO: _bindgen_ty_2 = 91;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PERF_BOOST: _bindgen_ty_2 = 92;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PERF_VPSTATES_GET_CONTROL: _bindgen_ty_2 =
93;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_ZBC_CLEAR_TABLE: _bindgen_ty_2 = 94;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_ZBC_COLOR_CLEAR: _bindgen_ty_2 = 95;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_ZBC_DEPTH_CLEAR: _bindgen_ty_2 = 96;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPFIFO_SCHEDULE: _bindgen_ty_2 = 97;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_TIMESLICE: _bindgen_ty_2 = 98;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PREEMPT: _bindgen_ty_2 = 99;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FIFO_DISABLE_CHANNELS: _bindgen_ty_2 = 100;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_TSG_INTERLEAVE_LEVEL: _bindgen_ty_2 =
101;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_CHANNEL_INTERLEAVE_LEVEL: _bindgen_ty_2
= 102;
+pub const NV_VGPU_MSG_FUNCTION_GSP_RM_ALLOC: _bindgen_ty_2 = 103;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_P2P_CAPS_V2: _bindgen_ty_2 = 104;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CIPHER_AES_ENCRYPT: _bindgen_ty_2 = 105;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CIPHER_SESSION_KEY: _bindgen_ty_2 = 106;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CIPHER_SESSION_KEY_STATUS: _bindgen_ty_2 =
107;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_CLEAR_ALL_SM_ERROR_STATES:
_bindgen_ty_2 = 108;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_READ_ALL_SM_ERROR_STATES: _bindgen_ty_2
= 109;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_EXCEPTION_MASK: _bindgen_ty_2 =
110;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_PROMOTE_CTX: _bindgen_ty_2 = 111;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_CTXSW_PREEMPTION_BIND: _bindgen_ty_2 =
112;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_SET_CTXSW_PREEMPTION_MODE: _bindgen_ty_2
= 113;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_CTXSW_ZCULL_BIND: _bindgen_ty_2 = 114;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_INITIALIZE_CTX: _bindgen_ty_2 = 115;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_VASPACE_COPY_SERVER_RESERVED_PDES:
_bindgen_ty_2 = 116;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FIFO_CLEAR_FAULTED_BIT: _bindgen_ty_2 =
117;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_LATEST_ECC_ADDRESSES: _bindgen_ty_2 =
118;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_MC_SERVICE_INTERRUPTS: _bindgen_ty_2 = 119;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DMA_SET_DEFAULT_VASPACE: _bindgen_ty_2 =
120;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_CE_PCE_MASK: _bindgen_ty_2 = 121;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_ZBC_CLEAR_TABLE_ENTRY: _bindgen_ty_2 =
122;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_NVLINK_PEER_ID_MASK: _bindgen_ty_2 =
123;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_NVLINK_STATUS: _bindgen_ty_2 = 124;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_P2P_CAPS: _bindgen_ty_2 = 125;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_P2P_CAPS_MATRIX: _bindgen_ty_2 = 126;
+pub const NV_VGPU_MSG_FUNCTION_RESERVED_0: _bindgen_ty_2 = 127;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESERVE_PM_AREA_SMPC: _bindgen_ty_2 = 128;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESERVE_HWPM_LEGACY: _bindgen_ty_2 = 129;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_B0CC_EXEC_REG_OPS: _bindgen_ty_2 = 130;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_BIND_PM_RESOURCES: _bindgen_ty_2 = 131;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SUSPEND_CONTEXT: _bindgen_ty_2 = 132;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_RESUME_CONTEXT: _bindgen_ty_2 = 133;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_EXEC_REG_OPS: _bindgen_ty_2 = 134;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_MODE_MMU_DEBUG: _bindgen_ty_2 =
135;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_READ_SINGLE_SM_ERROR_STATE:
_bindgen_ty_2 = 136;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_CLEAR_SINGLE_SM_ERROR_STATE:
_bindgen_ty_2 = 137;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_MODE_ERRBAR_DEBUG: _bindgen_ty_2 =
138;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_NEXT_STOP_TRIGGER_TYPE:
_bindgen_ty_2 = 139;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_ALLOC_PMA_STREAM: _bindgen_ty_2 = 140;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PMA_STREAM_UPDATE_GET_PUT: _bindgen_ty_2 =
141;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FB_GET_INFO_V2: _bindgen_ty_2 = 142;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FIFO_SET_CHANNEL_PROPERTIES: _bindgen_ty_2
= 143;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_GET_CTX_BUFFER_INFO: _bindgen_ty_2 =
144;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_KGR_GET_CTX_BUFFER_PTES: _bindgen_ty_2 =
145;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_EVICT_CTX: _bindgen_ty_2 = 146;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FB_GET_FS_INFO: _bindgen_ty_2 = 147;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GRMGR_GET_GR_FS_INFO: _bindgen_ty_2 = 148;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_STOP_CHANNEL: _bindgen_ty_2 = 149;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_PC_SAMPLING_MODE: _bindgen_ty_2 = 150;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PERF_RATED_TDP_GET_STATUS: _bindgen_ty_2 =
151;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PERF_RATED_TDP_SET_CONTROL: _bindgen_ty_2 =
152;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FREE_PMA_STREAM: _bindgen_ty_2 = 153;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_TIMER_SET_GR_TICK_FREQ: _bindgen_ty_2 =
154;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FIFO_SETUP_VF_ZOMBIE_SUBCTX_PDB:
_bindgen_ty_2 = 155;
+pub const NV_VGPU_MSG_FUNCTION_GET_CONSOLIDATED_GR_STATIC_INFO: _bindgen_ty_2 =
156;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_SINGLE_SM_SINGLE_STEP:
_bindgen_ty_2 = 157;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_GET_TPC_PARTITION_MODE: _bindgen_ty_2 =
158;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GR_SET_TPC_PARTITION_MODE: _bindgen_ty_2 =
159;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_ALLOCATE: _bindgen_ty_2 =
160;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_DESTROY: _bindgen_ty_2 = 161;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_MAP: _bindgen_ty_2 = 162;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_UNMAP: _bindgen_ty_2 = 163;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_PUSH_STREAM: _bindgen_ty_2 =
164;
+pub const NV_VGPU_MSG_FUNCTION_UVM_PAGING_CHANNEL_SET_HANDLES: _bindgen_ty_2 =
165;
+pub const NV_VGPU_MSG_FUNCTION_UVM_METHOD_STREAM_GUEST_PAGES_OPERATION:
_bindgen_ty_2 = 166;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_INTERNAL_QUIESCE_PMA_CHANNEL: _bindgen_ty_2
= 167;
+pub const NV_VGPU_MSG_FUNCTION_DCE_RM_INIT: _bindgen_ty_2 = 168;
+pub const NV_VGPU_MSG_FUNCTION_REGISTER_VIRTUAL_EVENT_BUFFER: _bindgen_ty_2 =
169;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_EVENT_BUFFER_UPDATE_GET: _bindgen_ty_2 =
170;
+pub const NV_VGPU_MSG_FUNCTION_GET_PLCABLE_ADDRESS_KIND: _bindgen_ty_2 = 171;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PERF_LIMITS_SET_STATUS_V2: _bindgen_ty_2 =
172;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_INTERNAL_SRIOV_PROMOTE_PMA_STREAM:
_bindgen_ty_2 = 173;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_MMU_DEBUG_MODE: _bindgen_ty_2 = 174;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_INTERNAL_PROMOTE_FAULT_METHOD_BUFFERS:
_bindgen_ty_2 = 175;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FLCN_GET_CTX_BUFFER_SIZE: _bindgen_ty_2 =
176;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FLCN_GET_CTX_BUFFER_INFO: _bindgen_ty_2 =
177;
+pub const NV_VGPU_MSG_FUNCTION_DISABLE_CHANNELS: _bindgen_ty_2 = 178;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FABRIC_MEMORY_DESCRIBE: _bindgen_ty_2 =
179;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FABRIC_MEM_STATS: _bindgen_ty_2 = 180;
+pub const NV_VGPU_MSG_FUNCTION_SAVE_HIBERNATION_DATA: _bindgen_ty_2 = 181;
+pub const NV_VGPU_MSG_FUNCTION_RESTORE_HIBERNATION_DATA: _bindgen_ty_2 = 182;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_INTERNAL_MEMSYS_SET_ZBC_REFERENCED:
_bindgen_ty_2 = 183;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_EXEC_PARTITIONS_CREATE: _bindgen_ty_2 =
184;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_EXEC_PARTITIONS_DELETE: _bindgen_ty_2 =
185;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPFIFO_GET_WORK_SUBMIT_TOKEN: _bindgen_ty_2
= 186;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPFIFO_SET_WORK_SUBMIT_TOKEN_NOTIF_INDEX:
_bindgen_ty_2 = 187;
+pub const
NV_VGPU_MSG_FUNCTION_PMA_SCRUBBER_SHARED_BUFFER_GUEST_PAGES_OPERATION:
_bindgen_ty_2 = 188;
+pub const
NV_VGPU_MSG_FUNCTION_CTRL_MASTER_GET_VIRTUAL_FUNCTION_ERROR_CONT_INTR_MASK:
_bindgen_ty_2 = 189;
+pub const NV_VGPU_MSG_FUNCTION_SET_SYSMEM_DIRTY_PAGE_TRACKING_BUFFER:
_bindgen_ty_2 = 190;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SUBDEVICE_GET_P2P_CAPS: _bindgen_ty_2 =
191;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_BUS_SET_P2P_MAPPING: _bindgen_ty_2 = 192;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_BUS_UNSET_P2P_MAPPING: _bindgen_ty_2 = 193;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_FLA_SETUP_INSTANCE_MEM_BLOCK: _bindgen_ty_2
= 194;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_MIGRATABLE_OPS: _bindgen_ty_2 = 195;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_TOTAL_HS_CREDITS: _bindgen_ty_2 = 196;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GET_HS_CREDITS: _bindgen_ty_2 = 197;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_HS_CREDITS: _bindgen_ty_2 = 198;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_PM_AREA_PC_SAMPLER: _bindgen_ty_2 = 199;
+pub const NV_VGPU_MSG_FUNCTION_INVALIDATE_TLB: _bindgen_ty_2 = 200;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_QUERY_ECC_STATUS: _bindgen_ty_2 = 201;
+pub const NV_VGPU_MSG_FUNCTION_ECC_NOTIFIER_WRITE_ACK: _bindgen_ty_2 = 202;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_GET_MODE_MMU_DEBUG: _bindgen_ty_2 =
203;
+pub const NV_VGPU_MSG_FUNCTION_RM_API_CONTROL: _bindgen_ty_2 = 204;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CMD_INTERNAL_GPU_START_FABRIC_PROBE:
_bindgen_ty_2 = 205;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_NVLINK_GET_INBAND_RECEIVED_DATA:
_bindgen_ty_2 = 206;
+pub const NV_VGPU_MSG_FUNCTION_GET_STATIC_DATA: _bindgen_ty_2 = 207;
+pub const NV_VGPU_MSG_FUNCTION_RESERVED_208: _bindgen_ty_2 = 208;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_GPU_GET_INFO_V2: _bindgen_ty_2 = 209;
+pub const NV_VGPU_MSG_FUNCTION_GET_BRAND_CAPS: _bindgen_ty_2 = 210;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CMD_NVLINK_INBAND_SEND_DATA: _bindgen_ty_2
= 211;
+pub const NV_VGPU_MSG_FUNCTION_UPDATE_GPM_GUEST_BUFFER_INFO: _bindgen_ty_2 =
212;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CMD_INTERNAL_CONTROL_GSP_TRACE:
_bindgen_ty_2 = 213;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SET_ZBC_STENCIL_CLEAR: _bindgen_ty_2 = 214;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SUBDEVICE_GET_VGPU_HEAP_STATS:
_bindgen_ty_2 = 215;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_SUBDEVICE_GET_LIBOS_HEAP_STATS:
_bindgen_ty_2 = 216;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_SET_MODE_MMU_GCC_DEBUG: _bindgen_ty_2 =
217;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_DBG_GET_MODE_MMU_GCC_DEBUG: _bindgen_ty_2 =
218;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESERVE_HES: _bindgen_ty_2 = 219;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RELEASE_HES: _bindgen_ty_2 = 220;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RESERVE_CCU_PROF: _bindgen_ty_2 = 221;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_RELEASE_CCU_PROF: _bindgen_ty_2 = 222;
+pub const NV_VGPU_MSG_FUNCTION_RESERVED: _bindgen_ty_2 = 223;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CMD_GET_CHIPLET_HS_CREDIT_POOL:
_bindgen_ty_2 = 224;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_CMD_GET_HS_CREDITS_MAPPING: _bindgen_ty_2 =
225;
+pub const NV_VGPU_MSG_FUNCTION_CTRL_EXEC_PARTITIONS_EXPORT: _bindgen_ty_2 =
226;
+pub const NV_VGPU_MSG_FUNCTION_NUM_FUNCTIONS: _bindgen_ty_2 = 227;
+pub type _bindgen_ty_2 = ffi::c_uint;
+pub const NV_VGPU_MSG_EVENT_FIRST_EVENT: _bindgen_ty_3 = 4096;
+pub const NV_VGPU_MSG_EVENT_GSP_INIT_DONE: _bindgen_ty_3 = 4097;
+pub const NV_VGPU_MSG_EVENT_GSP_RUN_CPU_SEQUENCER: _bindgen_ty_3 = 4098;
+pub const NV_VGPU_MSG_EVENT_POST_EVENT: _bindgen_ty_3 = 4099;
+pub const NV_VGPU_MSG_EVENT_RC_TRIGGERED: _bindgen_ty_3 = 4100;
+pub const NV_VGPU_MSG_EVENT_MMU_FAULT_QUEUED: _bindgen_ty_3 = 4101;
+pub const NV_VGPU_MSG_EVENT_OS_ERROR_LOG: _bindgen_ty_3 = 4102;
+pub const NV_VGPU_MSG_EVENT_RG_LINE_INTR: _bindgen_ty_3 = 4103;
+pub const NV_VGPU_MSG_EVENT_GPUACCT_PERFMON_UTIL_SAMPLES: _bindgen_ty_3 = 4104;
+pub const NV_VGPU_MSG_EVENT_SIM_READ: _bindgen_ty_3 = 4105;
+pub const NV_VGPU_MSG_EVENT_SIM_WRITE: _bindgen_ty_3 = 4106;
+pub const NV_VGPU_MSG_EVENT_SEMAPHORE_SCHEDULE_CALLBACK: _bindgen_ty_3 = 4107;
+pub const NV_VGPU_MSG_EVENT_UCODE_LIBOS_PRINT: _bindgen_ty_3 = 4108;
+pub const NV_VGPU_MSG_EVENT_VGPU_GSP_PLUGIN_TRIGGERED: _bindgen_ty_3 = 4109;
+pub const NV_VGPU_MSG_EVENT_PERF_GPU_BOOST_SYNC_LIMITS_CALLBACK: _bindgen_ty_3
= 4110;
+pub const NV_VGPU_MSG_EVENT_PERF_BRIDGELESS_INFO_UPDATE: _bindgen_ty_3 = 4111;
+pub const NV_VGPU_MSG_EVENT_VGPU_CONFIG: _bindgen_ty_3 = 4112;
+pub const NV_VGPU_MSG_EVENT_DISPLAY_MODESET: _bindgen_ty_3 = 4113;
+pub const NV_VGPU_MSG_EVENT_EXTDEV_INTR_SERVICE: _bindgen_ty_3 = 4114;
+pub const NV_VGPU_MSG_EVENT_NVLINK_INBAND_RECEIVED_DATA_256: _bindgen_ty_3 =
4115;
+pub const NV_VGPU_MSG_EVENT_NVLINK_INBAND_RECEIVED_DATA_512: _bindgen_ty_3 =
4116;
+pub const NV_VGPU_MSG_EVENT_NVLINK_INBAND_RECEIVED_DATA_1024: _bindgen_ty_3 =
4117;
+pub const NV_VGPU_MSG_EVENT_NVLINK_INBAND_RECEIVED_DATA_2048: _bindgen_ty_3 =
4118;
+pub const NV_VGPU_MSG_EVENT_NVLINK_INBAND_RECEIVED_DATA_4096: _bindgen_ty_3 =
4119;
+pub const NV_VGPU_MSG_EVENT_TIMED_SEMAPHORE_RELEASE: _bindgen_ty_3 = 4120;
+pub const NV_VGPU_MSG_EVENT_NVLINK_IS_GPU_DEGRADED: _bindgen_ty_3 = 4121;
+pub const NV_VGPU_MSG_EVENT_PFM_REQ_HNDLR_STATE_SYNC_CALLBACK: _bindgen_ty_3 =
4122;
+pub const NV_VGPU_MSG_EVENT_NVLINK_FAULT_UP: _bindgen_ty_3 = 4123;
+pub const NV_VGPU_MSG_EVENT_GSP_LOCKDOWN_NOTICE: _bindgen_ty_3 = 4124;
+pub const NV_VGPU_MSG_EVENT_MIG_CI_CONFIG_UPDATE: _bindgen_ty_3 = 4125;
+pub const NV_VGPU_MSG_EVENT_UPDATE_GSP_TRACE: _bindgen_ty_3 = 4126;
+pub const NV_VGPU_MSG_EVENT_NVLINK_FATAL_ERROR_RECOVERY: _bindgen_ty_3 = 4127;
+pub const NV_VGPU_MSG_EVENT_GSP_POST_NOCAT_RECORD: _bindgen_ty_3 = 4128;
+pub const NV_VGPU_MSG_EVENT_FECS_ERROR: _bindgen_ty_3 = 4129;
+pub const NV_VGPU_MSG_EVENT_RECOVERY_ACTION: _bindgen_ty_3 = 4130;
+pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131;
+pub type _bindgen_ty_3 = ffi::c_uint;
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct DOD_METHOD_DATA {
+ pub status: u32_,
+ pub acpiIdListLen: u32_,
+ pub acpiIdList: [u32_; 16usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct JT_METHOD_DATA {
+ pub status: u32_,
+ pub jtCaps: u32_,
+ pub jtRevId: u16_,
+ pub bSBIOSCaps: u8_,
+ pub __bindgen_padding_0: u8,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct MUX_METHOD_DATA_ELEMENT {
+ pub acpiId: u32_,
+ pub mode: u32_,
+ pub status: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct MUX_METHOD_DATA {
+ pub tableLen: u32_,
+ pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+ pub acpiIdMuxPartTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+ pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct CAPS_METHOD_DATA {
+ pub status: u32_,
+ pub optimusCaps: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct ACPI_METHOD_DATA {
+ pub bValid: u8_,
+ pub __bindgen_padding_0: [u8; 3usize],
+ pub dodMethodData: DOD_METHOD_DATA,
+ pub jtMethodData: JT_METHOD_DATA,
+ pub muxMethodData: MUX_METHOD_DATA,
+ pub capsMethodData: CAPS_METHOD_DATA,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct BUSINFO {
+ pub deviceID: u16_,
+ pub vendorID: u16_,
+ pub subdeviceID: u16_,
+ pub subvendorID: u16_,
+ pub revisionID: u8_,
+ pub __bindgen_padding_0: u8,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_VF_INFO {
+ pub totalVFs: u32_,
+ pub firstVFOffset: u32_,
+ pub FirstVFBar0Address: u64_,
+ pub FirstVFBar1Address: u64_,
+ pub FirstVFBar2Address: u64_,
+ pub b64bitBar0: u8_,
+ pub b64bitBar1: u8_,
+ pub b64bitBar2: u8_,
+ pub __bindgen_padding_0: [u8; 5usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_PCIE_CONFIG_REG {
+ pub linkCap: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GspSystemInfo {
+ pub gpuPhysAddr: u64_,
+ pub gpuPhysFbAddr: u64_,
+ pub gpuPhysInstAddr: u64_,
+ pub gpuPhysIoAddr: u64_,
+ pub nvDomainBusDeviceFunc: u64_,
+ pub simAccessBufPhysAddr: u64_,
+ pub notifyOpSharedSurfacePhysAddr: u64_,
+ pub pcieAtomicsOpMask: u64_,
+ pub consoleMemSize: u64_,
+ pub maxUserVa: u64_,
+ pub pciConfigMirrorBase: u32_,
+ pub pciConfigMirrorSize: u32_,
+ pub PCIDeviceID: u32_,
+ pub PCISubDeviceID: u32_,
+ pub PCIRevisionID: u32_,
+ pub pcieAtomicsCplDeviceCapMask: u32_,
+ pub oorArch: u8_,
+ pub __bindgen_padding_0: [u8; 7usize],
+ pub clPdbProperties: u64_,
+ pub Chipset: u32_,
+ pub bGpuBehindBridge: u8_,
+ pub bFlrSupported: u8_,
+ pub b64bBar0Supported: u8_,
+ pub bMnocAvailable: u8_,
+ pub chipsetL1ssEnable: u32_,
+ pub bUpstreamL0sUnsupported: u8_,
+ pub bUpstreamL1Unsupported: u8_,
+ pub bUpstreamL1PorSupported: u8_,
+ pub bUpstreamL1PorMobileOnly: u8_,
+ pub bSystemHasMux: u8_,
+ pub upstreamAddressValid: u8_,
+ pub FHBBusInfo: BUSINFO,
+ pub chipsetIDInfo: BUSINFO,
+ pub __bindgen_padding_1: [u8; 2usize],
+ pub acpiMethodData: ACPI_METHOD_DATA,
+ pub hypervisorType: u32_,
+ pub bIsPassthru: u8_,
+ pub __bindgen_padding_2: [u8; 7usize],
+ pub sysTimerOffsetNs: u64_,
+ pub gspVFInfo: GSP_VF_INFO,
+ pub bIsPrimary: u8_,
+ pub isGridBuild: u8_,
+ pub __bindgen_padding_3: [u8; 2usize],
+ pub pcieConfigReg: GSP_PCIE_CONFIG_REG,
+ pub gridBuildCsp: u32_,
+ pub bPreserveVideoMemoryAllocations: u8_,
+ pub bTdrEventSupported: u8_,
+ pub bFeatureStretchVblankCapable: u8_,
+ pub bEnableDynamicGranularityPageArrays: u8_,
+ pub bClockBoostSupported: u8_,
+ pub bRouteDispIntrsToCPU: u8_,
+ pub __bindgen_padding_4: [u8; 6usize],
+ pub hostPageSize: u64_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct MESSAGE_QUEUE_INIT_ARGUMENTS {
+ pub sharedMemPhysAddr: u64_,
+ pub pageTableEntryCount: u32_,
+ pub __bindgen_padding_0: [u8; 4usize],
+ pub cmdQueueOffset: u64_,
+ pub statQueueOffset: u64_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_SR_INIT_ARGUMENTS {
+ pub oldLevel: u32_,
+ pub flags: u32_,
+ pub bInPMTransition: u8_,
+ pub __bindgen_padding_0: [u8; 3usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_ARGUMENTS_CACHED {
+ pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS,
+ pub srInitArguments: GSP_SR_INIT_ARGUMENTS,
+ pub gpuInstance: u32_,
+ pub bDmemStack: u8_,
+ pub __bindgen_padding_0: [u8; 7usize],
+ pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 {
+ pub pa: u64_,
+ pub size: u64_,
+}
+#[repr(C)]
+#[derive(Copy, Clone, Zeroable)]
+pub union rpc_message_rpc_union_field_v03_00 {
+ pub spare: u32_,
+ pub cpuRmGfid: u32_,
+}
+impl Default for rpc_message_rpc_union_field_v03_00 {
+ fn default() -> Self {
+ let mut s = ::core::mem::MaybeUninit::<Self>::uninit();
+ unsafe {
+ ::core::ptr::write_bytes(s.as_mut_ptr(), 0, 1);
+ s.assume_init()
+ }
+ }
+}
+pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00;
+#[repr(C)]
+pub struct rpc_message_header_v03_00 {
+ pub header_version: u32_,
+ pub signature: u32_,
+ pub length: u32_,
+ pub function: u32_,
+ pub rpc_result: u32_,
+ pub rpc_result_private: u32_,
+ pub sequence: u32_,
+ pub u: rpc_message_rpc_union_field_v,
+ pub rpc_message_data: __IncompleteArrayField<u8_>,
+}
+impl Default for rpc_message_header_v03_00 {
+ fn default() -> Self {
+ let mut s = ::core::mem::MaybeUninit::<Self>::uninit();
+ unsafe {
+ ::core::ptr::write_bytes(s.as_mut_ptr(), 0, 1);
+ s.assume_init()
+ }
+ }
+}
+pub type rpc_message_header_v = rpc_message_header_v03_00;
#[repr(C)]
#[derive(Copy, Clone, Zeroable)]
pub struct GspFwWprMeta {
@@ -145,3 +648,41 @@ pub struct LibosMemoryRegionInitArgument {
pub loc: u8_,
pub __bindgen_padding_0: [u8; 6usize],
}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct msgqTxHeader {
+ pub version: u32_,
+ pub size: u32_,
+ pub msgSize: u32_,
+ pub msgCount: u32_,
+ pub writePtr: u32_,
+ pub flags: u32_,
+ pub rxHdrOff: u32_,
+ pub entryOff: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct msgqRxHeader {
+ pub readPtr: u32_,
+}
+#[repr(C)]
+#[repr(align(8))]
+#[derive(Zeroable)]
+pub struct GSP_MSG_QUEUE_ELEMENT {
+ pub authTagBuffer: [u8_; 16usize],
+ pub aadBuffer: [u8_; 16usize],
+ pub checkSum: u32_,
+ pub seqNum: u32_,
+ pub elemCount: u32_,
+ pub __bindgen_padding_0: [u8; 4usize],
+ pub rpc: rpc_message_header_v,
+}
+impl Default for GSP_MSG_QUEUE_ELEMENT {
+ fn default() -> Self {
+ let mut s = ::core::mem::MaybeUninit::<Self>::uninit();
+ unsafe {
+ ::core::ptr::write_bytes(s.as_mut_ptr(), 0, 1);
+ s.assume_init()
+ }
+ }
+}
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 07/14] gpu: nova-core: gsp: Add GSP command queue handling
This commit introduces core infrastructure for handling GSP command and
message queues in the nova-core driver. The command queue system enables
bidirectional communication between the host driver and GSP firmware
through a remote message passing interface.
The interface is based on passing serialised data structures over a ring
buffer with separate transmit and receive queues. Commands are sent by
writing to the CPU transmit queue and waiting for completion via the
receive queue.
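As a rough sketch of the intended usage (the helper below is illustrative
only and not part of this patch; `C` and `M` stand in for concrete command
and message types, and the `Default` bound exists purely to keep the example
short):

  use kernel::prelude::*;
  use kernel::time::Delta;

  use crate::driver::Bar0;
  use crate::gsp::cmdq::{Cmdq, CommandToGsp, MessageFromGsp};

  // Hypothetical round trip: queue a command, then wait for the GSP's reply.
  fn roundtrip<C, M>(cmdq: &mut Cmdq, bar: &Bar0) -> Result
  where
      C: CommandToGsp + Default,
      M: MessageFromGsp,
  {
      // Write the command into the CPU transmit queue and notify the GSP.
      cmdq.send_gsp_command(bar, C::default())?;

      // Poll the receive queue until the matching message arrives or we
      // time out.
      cmdq.receive_msg_from_gsp::<M, _>(Delta::from_millis(5000), |_msg, _payload| Ok(()))
  }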
To ensure safety, a mutable or immutable reference (depending on whether it
is a send or a receive operation) is taken on the command queue when
allocating the message to be written or read. This guarantees the message
memory remains valid and that the command queue cannot be mutated while an
operation is in progress.
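Concretely, the allocation helper hands out references that borrow the
shared queue memory exclusively, so the borrow checker enforces this at
compile time. An illustrative fragment (inside cmdq.rs; `SomeCommand` and
`payload_len` are hypothetical placeholders):

  // The returned header/command/payload references all borrow `gsp_mem`
  // mutably for as long as they are alive.
  let (hdr, cmd, payload_1, payload_2) =
      gsp_mem.allocate_command_regions::<SomeCommand>(payload_len)?;

  // gsp_mem.advance_cpu_write_ptr(1); // would not compile: `gsp_mem` is borrowed

  drop((hdr, cmd, payload_1, payload_2));
  gsp_mem.advance_cpu_write_ptr(1); // accepted once the borrows have ended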
Currently this is only used by the probe() routine and can therefore only be
used by a single thread of execution. Locking to enable safe access from
multiple threads will be introduced in a future series when it becomes
necessary.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v4:
- Use read_poll_timeout() instead of wait_on()
- Switch to using `init!` (Thanks Alex for figuring out the
required workarounds)
- Pass the enum type into the RPC bindings instead of a raw u32
- Fixup the TODOs for extracting/allocating the send command regions
- Split the sending functions into one taking just a command struct and
another taking a command struct with extra payload
Changes for v3:
- Reduce the receive payloads to the correct size
- Use opaque bindings
- Clean up of the command queue PTE creation
- Add an enum for the GSP functions
- Rename GspCommandToGsp and GspMessageFromGsp
- Rename GspCmdq
- Add function to notify GSP of updated queue pointers
- Inline driver area access functions
- Fixup receive area calculations
Changes for v2:
- Rebased on Alex's latest series
---
drivers/gpu/nova-core/gsp.rs | 9 +
drivers/gpu/nova-core/gsp/cmdq.rs | 493 ++++++++++++++++++++++++++++++
drivers/gpu/nova-core/regs.rs | 4 +
drivers/gpu/nova-core/sbuffer.rs | 2 -
scripts/Makefile.build | 2 +-
5 files changed, 507 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/nova-core/gsp/cmdq.rs
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index 554eb1a34ee7..1d472c5fad7a 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -2,6 +2,7 @@
mod boot;
+use kernel::alloc::flags::GFP_KERNEL;
use kernel::device;
use kernel::dma::CoherentAllocation;
use kernel::dma::DmaAddress;
@@ -11,6 +12,7 @@
use kernel::transmute::AsBytes;
use crate::fb::FbLayout;
+use crate::gsp::cmdq::Cmdq;
pub(crate) use fw::{GspFwWprMeta, LibosParams};
@@ -18,6 +20,8 @@
use fw::LibosMemoryRegionInitArgument;
+pub(crate) mod cmdq;
+
pub(crate) const GSP_PAGE_SHIFT: usize = 12;
pub(crate) const GSP_PAGE_SIZE: usize = 1 << GSP_PAGE_SHIFT;
@@ -31,6 +35,7 @@ pub(crate) struct Gsp {
loginit: LogBuffer,
logintr: LogBuffer,
logrm: LogBuffer,
+ pub(crate) cmdq: Cmdq,
}
#[repr(C)]
@@ -110,11 +115,15 @@ pub(crate) fn new(pdev:
&pci::Device<device::Bound>) -> Result<impl PinInit<Self
let logrm = LogBuffer::new(dev)?;
dma_write!(libos[2] =
LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0)?)?;
+ // Creates its own PTE array.
+ let cmdq = Cmdq::new(dev)?;
+
Ok(try_pin_init!(Self {
libos,
loginit,
logintr,
logrm,
+ cmdq,
}))
}
}
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs
b/drivers/gpu/nova-core/gsp/cmdq.rs
new file mode 100644
index 000000000000..3f8cb7a35922
--- /dev/null
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -0,0 +1,493 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use core::mem::offset_of;
+use core::sync::atomic::fence;
+use core::sync::atomic::Ordering;
+
+use kernel::alloc::flags::GFP_KERNEL;
+use kernel::device;
+use kernel::dma::CoherentAllocation;
+use kernel::dma_write;
+use kernel::io::poll::read_poll_timeout;
+use kernel::prelude::*;
+use kernel::sync::aref::ARef;
+use kernel::time::Delta;
+use kernel::transmute::{AsBytes, FromBytes};
+
+use crate::driver::Bar0;
+use crate::gsp::fw::{GspMsgElement, MsgFunction, MsgqRxHeader, MsgqTxHeader};
+use crate::gsp::PteArray;
+use crate::gsp::{GSP_PAGE_SHIFT, GSP_PAGE_SIZE};
+use crate::regs::NV_PGSP_QUEUE_HEAD;
+use crate::sbuffer::SBufferIter;
+
+pub(crate) trait CommandToGsp: Sized + FromBytes + AsBytes {
+ const FUNCTION: MsgFunction;
+}
+
+pub(crate) trait CommandToGspWithPayload: CommandToGsp {}
+
+pub(crate) trait MessageFromGsp: Sized + FromBytes + AsBytes {
+ const FUNCTION: MsgFunction;
+}
+
+/// Number of GSP pages making the Msgq.
+pub(crate) const MSGQ_NUM_PAGES: u32 = 0x3f;
+
+#[repr(C, align(0x1000))]
+#[derive(Debug)]
+struct MsgqData {
+ data: [[u8; GSP_PAGE_SIZE]; MSGQ_NUM_PAGES as usize],
+}
+
+// Annoyingly there is no real equivalent of #define so we're forced to use
a
+// literal to specify the alignment above. So check that against the actual GSP
+// page size here.
+static_assert!(align_of::<MsgqData>() == GSP_PAGE_SIZE);
+
+// There is no struct defined for this in the open-gpu-kernel-source headers.
+// Instead it is defined by code in GspMsgQueuesInit().
+#[repr(C)]
+struct Msgq {
+ tx: MsgqTxHeader,
+ rx: MsgqRxHeader,
+ msgq: MsgqData,
+}
+
+#[repr(C)]
+struct GspMem {
+ ptes: PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>,
+ cpuq: Msgq,
+ gspq: Msgq,
+}
+
+// SAFETY: These structs don't meet the no-padding requirements of AsBytes
but
+// that is not a problem because they are not used outside the kernel.
+unsafe impl AsBytes for GspMem {}
+
+// SAFETY: These structs don't meet the no-padding requirements of
FromBytes but
+// that is not a problem because they are not used outside the kernel.
+unsafe impl FromBytes for GspMem {}
+
+/// `GspMem` struct that is shared with the GSP.
+struct DmaGspMem(CoherentAllocation<GspMem>);
+
+impl DmaGspMem {
+ fn new(dev: &device::Device<device::Bound>) ->
Result<Self> {
+ const MSGQ_SIZE: u32 = size_of::<Msgq>() as u32;
+ const RX_HDR_OFF: u32 = offset_of!(Msgq, rx) as u32;
+
+ let gsp_mem =
CoherentAllocation::<GspMem>::alloc_coherent(dev, 1, GFP_KERNEL |
__GFP_ZERO)?;
+ dma_write!(gsp_mem[0].ptes = PteArray::new(gsp_mem.dma_handle())?)?;
+ dma_write!(gsp_mem[0].cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE,
RX_HDR_OFF, MSGQ_NUM_PAGES))?;
+ dma_write!(gsp_mem[0].cpuq.rx = MsgqRxHeader::new())?;
+
+ Ok(Self(gsp_mem))
+ }
+
+ // Allocates the various regions for the command and reduces the payload
size
+ // to match what is needed for the command.
+ //
+ // # Errors
+ //
+ // Returns `Err(EAGAIN)` if the driver area is too small to hold the
+ // requested command.
+ fn allocate_command_regions<'a, M: CommandToGsp>(
+ &'a mut self,
+ payload_size: usize,
+ ) -> Result<(&'a mut GspMsgElement, &'a mut M,
&'a mut [u8], &'a mut [u8])> {
+ let driver_area = self.driver_write_area();
+ let msg_size = size_of::<GspMsgElement>() + size_of::<M>() + payload_size;
+ let driver_area_size = (driver_area.0.len() + driver_area.1.len())
<< GSP_PAGE_SHIFT;
+
+ if msg_size > driver_area_size {
+ return Err(EAGAIN);
+ }
+
+ #[allow(clippy::incompatible_msrv)]
+ let (msg_header_slice, slice_1) = driver_area
+ .0
+ .as_flattened_mut()
+ .split_at_mut(size_of::<GspMsgElement>());
+ let msg_header =
GspMsgElement::from_bytes_mut(msg_header_slice).ok_or(EINVAL)?;
+ let (cmd_slice, payload_1) =
slice_1.split_at_mut(size_of::<M>());
+ let cmd = M::from_bytes_mut(cmd_slice).ok_or(EINVAL)?;
+
+ #[allow(clippy::incompatible_msrv)]
+ let payload_2 = driver_area.1.as_flattened_mut();
+
+ let (payload_1, payload_2) = if payload_1.len() > payload_size {
+ // Payload fits entirely in payload_1
+ (&mut payload_1[..payload_size], &mut payload_2[0..0])
+ } else {
+ // Need all of payload_1 and some of payload_2
+ let payload_2_len = payload_size - payload_1.len();
+ (payload_1, &mut payload_2[..payload_2_len])
+ };
+
+ Ok((msg_header, cmd, payload_1, payload_2))
+ }
+
+ fn driver_write_area(&mut self) -> (&mut [[u8; GSP_PAGE_SIZE]],
&mut [[u8; GSP_PAGE_SIZE]]) {
+ let tx = self.cpu_write_ptr() as usize;
+ let rx = self.gsp_read_ptr() as usize;
+
+ // SAFETY:
+ // - The [`CoherentAllocation`] contains exactly one object.
+ // - We will only access the driver-owned part of the shared memory.
+ // - Per the safety statement of the function, no concurrent access
will be performed.
+ let gsp_mem = &mut unsafe { self.0.as_slice_mut(0, 1)
}.unwrap()[0];
+ let (before_tx, after_tx) = gsp_mem.cpuq.msgq.data.split_at_mut(tx);
+
+ if rx <= tx {
+ // The area from `tx` up to the end of the ring, and from the
beginning of the ring up
+ // to `rx`, minus one unit, belongs to the driver.
+ if rx == 0 {
+ let last = after_tx.len() - 1;
+ (&mut after_tx[..last], &mut before_tx[0..0])
+ } else {
+ (after_tx, &mut before_tx[..rx])
+ }
+ } else {
+ // The area from `tx` to `rx`, minus one unit, belongs to the
driver.
+ (after_tx.split_at_mut(rx - tx).0, &mut before_tx[0..0])
+ }
+ }
+
+ fn driver_read_area(&self) -> (&[[u8; GSP_PAGE_SIZE]],
&[[u8; GSP_PAGE_SIZE]]) {
+ let tx = self.gsp_write_ptr() as usize;
+ let rx = self.cpu_read_ptr() as usize;
+
+ // SAFETY:
+ // - The [`CoherentAllocation`] contains exactly one object.
+ // - We will only access the driver-owned part of the shared memory.
+ // - Per the safety statement of the function, no concurrent access
will be performed.
+ let gsp_mem = &unsafe { self.0.as_slice(0, 1) }.unwrap()[0];
+ let (before_rx, after_rx) = gsp_mem.gspq.msgq.data.split_at(rx);
+
+ if tx == rx {
+ (&after_rx[0..0], &after_rx[0..0])
+ } else if tx > rx {
+ (&after_rx[..tx - rx], &before_rx[0..0])
+ } else {
+ (after_rx, &before_rx[..tx])
+ }
+ }
+
+ fn gsp_write_ptr(&self) -> u32 {
+ let gsp_mem = self.0.start_ptr();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ (unsafe { (*gsp_mem).gspq.tx.write_ptr() } % MSGQ_NUM_PAGES)
+ // dma_read!(gsp_mem[0].gspq.tx.writePtr).unwrap() % MSGQ_NUM_PAGES
+ }
+
+ fn gsp_read_ptr(&self) -> u32 {
+ let gsp_mem = self.0.start_ptr();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ (unsafe { (*gsp_mem).gspq.rx.read_ptr() } % MSGQ_NUM_PAGES)
+ }
+
+ fn cpu_read_ptr(&self) -> u32 {
+ let gsp_mem = self.0.start_ptr();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ (unsafe { (*gsp_mem).cpuq.rx.read_ptr() } % MSGQ_NUM_PAGES)
+ }
+
+ /// Inform the GSP that it can send `elem_count` new pages into the message
queue.
+ fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
+ // let gsp_mem = &self.0;
+ let rptr = self.cpu_read_ptr().wrapping_add(elem_count) %
MSGQ_NUM_PAGES;
+
+ // Ensure read pointer is properly ordered
+ fence(Ordering::SeqCst);
+
+ let gsp_mem = self.0.start_ptr_mut();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ unsafe { (*gsp_mem).cpuq.rx.set_read_ptr(rptr) };
+ }
+
+ fn cpu_write_ptr(&self) -> u32 {
+ let gsp_mem = self.0.start_ptr();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ (unsafe { (*gsp_mem).cpuq.tx.write_ptr() } % MSGQ_NUM_PAGES)
+ }
+
+ /// Inform the GSP that it can process `elem_count` new pages from the
command queue.
+ fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
+ let wptr = self.cpu_write_ptr().wrapping_add(elem_count) %
MSGQ_NUM_PAGES;
+ let gsp_mem = self.0.start_ptr_mut();
+
+ // SAFETY:
+ // - The ['CoherentAllocation'] contains at least one object.
+ // - By the invariants of CoherentAllocation the pointer is valid.
+ unsafe { (*gsp_mem).cpuq.tx.set_write_ptr(wptr) };
+
+ // Ensure all command data is visible before triggering the GSP read
+ fence(Ordering::SeqCst);
+ }
+}
+
+pub(crate) struct Cmdq {
+ dev: ARef<device::Device>,
+ seq: u32,
+ gsp_mem: DmaGspMem,
+ pub _nr_ptes: u32,
+}
+
+impl Cmdq {
+ pub(crate) fn new(dev: &device::Device<device::Bound>) ->
Result<Cmdq> {
+ let gsp_mem = DmaGspMem::new(dev)?;
+ let nr_ptes = size_of::<GspMem>() >> GSP_PAGE_SHIFT;
+ build_assert!(nr_ptes * size_of::<u64>() <= GSP_PAGE_SIZE);
+
+ Ok(Cmdq {
+ dev: dev.into(),
+ seq: 0,
+ gsp_mem,
+ _nr_ptes: nr_ptes as u32,
+ })
+ }
+
+ fn calculate_checksum<T: Iterator<Item = u8>>(it: T) -> u32
{
+ let sum64 = it
+ .enumerate()
+ .map(|(idx, byte)| (((idx % 8) * 8) as u32, byte))
+ .fold(0, |acc, (rol, byte)| acc ^
u64::from(byte).rotate_left(rol));
+
+ ((sum64 >> 32) as u32) ^ (sum64 as u32)
+ }
+
+ // Notify GSP that we have updated the command queue pointers.
+ fn notify_gsp(bar: &Bar0) {
+ NV_PGSP_QUEUE_HEAD::default().set_address(0).write(bar);
+ }
+
+ #[expect(unused)]
+ pub(crate) fn send_gsp_command<M, E>(&mut self, bar: &Bar0,
init: impl Init<M, E>) -> Result
+ where
+ M: CommandToGsp,
+ // This allows all error types, including `Infallible`, to be used with
`init`. Without
+ // this we cannot use regular stack objects as `init` since their
`Init` implementation
+ // does not return any error.
+ Error: From<E>,
+ {
+ #[repr(C)]
+ struct FullCommand<M> {
+ hdr: GspMsgElement,
+ cmd: M,
+ }
+ let (msg_header, cmd, _, _) =
self.gsp_mem.allocate_command_regions::<M>(0)?;
+
+ let seq = self.seq;
+ let initializer = try_init!(FullCommand {
+ hdr <- GspMsgElement::init(seq, size_of::<M>(),
M::FUNCTION),
+ cmd <- init,
+ });
+
+ // Fill the header and command in-place.
+ // SAFETY:
+ // - allocate_command_regions guarantees msg_header points to enough
+ // space in the command queue to contain FullCommand
+ // - __init ensures the command header and struct are fully initialized
+ unsafe {
+ initializer.__init(msg_header.as_bytes_mut().as_mut_ptr().cast())?;
+ }
+
+ msg_header.set_checksum(Cmdq::calculate_checksum(SBufferIter::new_reader([
+ msg_header.as_bytes(),
+ cmd.as_bytes(),
+ ])));
+
+ dev_info!(
+ &self.dev,
+ "GSP RPC: send: seq# {}, function=0x{:x} ({}),
length=0x{:x}\n",
+ self.seq,
+ msg_header.function_number(),
+ msg_header.function()?,
+ msg_header.length(),
+ );
+
+ let elem_count = msg_header.element_count();
+ self.seq += 1;
+ self.gsp_mem.advance_cpu_write_ptr(elem_count);
+ Cmdq::notify_gsp(bar);
+
+ Ok(())
+ }
+
+ #[expect(unused)]
+ pub(crate) fn send_gsp_command_with_payload<M, E>(
+ &mut self,
+ bar: &Bar0,
+ payload_size: usize,
+ init: impl Init<M, E>,
+ init_payload: impl
FnOnce(SBufferIter<core::array::IntoIter<&mut [u8], 2>>) ->
Result,
+ ) -> Result
+ where
+ M: CommandToGspWithPayload,
+ // This allows all error types, including `Infallible`, to be used with
`init`. Without
+ // this we cannot use regular stack objects as `init` since their
`Init` implementation
+ // does not return any error.
+ Error: From<E>,
+ {
+ #[repr(C)]
+ struct FullCommand<M> {
+ hdr: GspMsgElement,
+ cmd: M,
+ }
+ let (msg_header, cmd, payload_1, payload_2) =
self.gsp_mem.allocate_command_regions::<M>(payload_size)?;
+
+ let seq = self.seq;
+ let initializer = try_init!(FullCommand {
+ hdr <- GspMsgElement::init(seq, size_of::<M>() +
payload_size, M::FUNCTION),
+ cmd <- init,
+ });
+
+ // Fill the header and command in-place.
+ // SAFETY:
+ // - allocate_command_regions guarantees msg_header points to enough
+ // space in the command queue to contain FullCommand
+ // - __init ensures the command header and struct are fully initialized
+ unsafe {
+ initializer.__init(msg_header.as_bytes_mut().as_mut_ptr().cast())?;
+ }
+
+ // Fill the payload
+ let sbuffer = SBufferIter::new_writer([&mut payload_1[..], &mut
payload_2[..]]);
+ init_payload(sbuffer)?;
+
+ msg_header.set_checksum(Cmdq::calculate_checksum(SBufferIter::new_reader([
+ msg_header.as_bytes(),
+ cmd.as_bytes(),
+ payload_1,
+ payload_2,
+ ])));
+
+ dev_info!(
+ &self.dev,
+ "GSP RPC: send: seq# {}, function=0x{:x} ({}),
length=0x{:x}\n",
+ self.seq,
+ msg_header.function_number(),
+ msg_header.function()?,
+ msg_header.length(),
+ );
+
+ let elem_count = msg_header.element_count();
+ self.seq += 1;
+ self.gsp_mem.advance_cpu_write_ptr(elem_count);
+ Cmdq::notify_gsp(bar);
+
+ Ok(())
+ }
+
+ #[expect(unused)]
+ pub(crate) fn receive_msg_from_gsp<M: MessageFromGsp, R>(
+ &mut self,
+ timeout: Delta,
+ init: impl FnOnce(&M,
SBufferIter<core::array::IntoIter<&[u8], 2>>) ->
Result<R>,
+ ) -> Result<R> {
+ let driver_area = read_poll_timeout(
+ || Ok(self.gsp_mem.driver_read_area()),
+ |driver_area: &(&[[u8; 4096]], &[[u8; 4096]])|
!driver_area.0.is_empty(),
+ Delta::from_millis(10),
+ timeout,
+ )?;
+
+ #[allow(clippy::incompatible_msrv)]
+ let (msg_header_slice, slice_1) = driver_area
+ .0
+ .as_flattened()
+ .split_at(size_of::<GspMsgElement>());
+ let msg_header =
GspMsgElement::from_bytes(msg_header_slice).ok_or(EIO)?;
+ if msg_header.length() < size_of::<M>() as u32 {
+ return Err(EIO);
+ }
+
+ let function: MsgFunction = msg_header.function().map_err(|_| {
+ dev_info!(
+ self.dev,
+ "GSP RPC: receive: seq# {}, bad function=0x{:x},
length=0x{:x}\n",
+ msg_header.sequence(),
+ msg_header.function_number(),
+ msg_header.length(),
+ );
+ EIO
+ })?;
+
+ // Log RPC receive with message type decoding
+ dev_info!(
+ self.dev,
+ "GSP RPC: receive: seq# {}, function=0x{:x} ({}),
length=0x{:x}\n",
+ msg_header.sequence(),
+ msg_header.function_number(),
+ function,
+ msg_header.length(),
+ );
+
+ let (cmd_slice, payload_1) = slice_1.split_at(size_of::<M>());
+ #[allow(clippy::incompatible_msrv)]
+ let payload_2 = driver_area.1.as_flattened();
+
+ // Cut the payload slice(s) down to the actual length of the payload.
+ let (cmd_payload_1, cmd_payload_2) = if payload_1.len() >
msg_header.length() as usize - size_of::<M>() {
+ (
+ payload_1
+ .split_at(msg_header.length() as usize -
size_of::<M>())
+ .0,
+ &payload_2[0..0],
+ )
+ } else {
+ (
+ payload_1,
+ payload_2
+ .split_at(msg_header.length() as usize -
size_of::<M>() - payload_1.len())
+ .0,
+ )
+ };
+
+ if Cmdq::calculate_checksum(SBufferIter::new_reader([
+ msg_header.as_bytes(),
+ cmd_slice,
+ cmd_payload_1,
+ cmd_payload_2,
+ ])) != 0
+ {
+ dev_err!(
+ self.dev,
+ "GSP RPC: receive: Call {} - bad checksum",
+ msg_header.sequence()
+ );
+ return Err(EIO);
+ }
+
+ let result = if function == M::FUNCTION {
+ let cmd = M::from_bytes(cmd_slice).ok_or(EINVAL)?;
+ let sbuffer = SBufferIter::new_reader([cmd_payload_1,
cmd_payload_2]);
+ init(cmd, sbuffer)
+ } else {
+ Err(ERANGE)
+ };
+
+ self.gsp_mem
+ .advance_cpu_read_ptr(msg_header.length().div_ceil(GSP_PAGE_SIZE as
u32));
+ result
+ }
+}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 206dab2e1335..0585699ae951 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -71,6 +71,10 @@ pub(crate) fn chipset(self) -> Result<Chipset> {
30:30 ecc_mode_enabled as bool;
});
+register!(NV_PGSP_QUEUE_HEAD @ 0x00110c00 {
+ 31:0 address as u32;
+});
+
impl NV_PFB_PRI_MMU_LOCAL_MEMORY_RANGE {
/// Returns the usable framebuffer size, in bytes.
pub(crate) fn usable_fb_size(self) -> u64 {
diff --git a/drivers/gpu/nova-core/sbuffer.rs b/drivers/gpu/nova-core/sbuffer.rs
index d9c412a68bd8..1a27226b65d8 100644
--- a/drivers/gpu/nova-core/sbuffer.rs
+++ b/drivers/gpu/nova-core/sbuffer.rs
@@ -50,7 +50,6 @@ impl<'a, I> SBufferIter<I>
/// let sum: u8 = sbuffer.sum();
/// assert_eq!(sum, 45);
/// ```
- #[expect(unused)]
pub(crate) fn new_reader(slices: impl IntoIterator<IntoIter = I>)
-> Self
where
I: Iterator<Item = &'a [u8]>,
@@ -72,7 +71,6 @@ pub(crate) fn new_reader(slices: impl IntoIterator<IntoIter
= I>) -> Self
/// assert_eq!(buf2, [5, 6, 7, 8, 9]);
///
/// ```
- #[expect(unused)]
pub(crate) fn new_writer(slices: impl IntoIterator<IntoIter = I>)
-> Self
where
I: Iterator<Item = &'a mut [u8]>,
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index d0ee33a487be..4ac6304332b6 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -317,7 +317,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE
#
# Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on
# the unstable features in use.
-rust_allowed_features :=
asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg
+rust_allowed_features :=
asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg,slice_flatten
# `--out-dir` is required to avoid temporaries being created by `rustc` in the
# current working directory, which may be not accessible in the out-of-tree
--
2.50.1
Initialise the GSP resource manager arguments (rmargs), which provide
initialisation parameters to the GSP firmware during boot. The rmargs
structure contains the arguments used to configure the location of the GSP
message/command queues.
These are mapped for coherent DMA and added to the libos data structure
for access when booting GSP.
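For readability, the change below boils down to the following sketch
(condensed from the gsp.rs hunk in this patch):

  // Allocate the cached RM arguments in DMA-coherent memory, expose them to
  // libos as "RMARGS", and point them at the command queues set up by
  // Cmdq::new().
  let rmargs = CoherentAllocation::<GspArgumentsCached>::alloc_coherent(
      dev,
      1,
      GFP_KERNEL | __GFP_ZERO,
  )?;
  dma_write!(libos[3] = LibosMemoryRegionInitArgument::new("RMARGS", &rmargs)?)?;
  dma_write!(
      rmargs[0] = fw::GspArgumentsCached::new(
          fw::MessageQueueInitArguments::new(&cmdq),
          fw::GspSrInitArguments::new()
      )
  )?;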
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Derive Zeroable trait
Changes for v2:
- Rebased on Alex's latest series
---
drivers/gpu/nova-core/gsp.rs | 16 +++
drivers/gpu/nova-core/gsp/cmdq.rs | 24 +++-
drivers/gpu/nova-core/gsp/fw.rs | 60 ++++++++
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 132 ------------------
4 files changed, 97 insertions(+), 135 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index 1d472c5fad7a..58b595b8badd 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -19,6 +19,7 @@
mod fw;
use fw::LibosMemoryRegionInitArgument;
+use fw::GspArgumentsCached;
pub(crate) mod cmdq;
@@ -36,6 +37,7 @@ pub(crate) struct Gsp {
logintr: LogBuffer,
logrm: LogBuffer,
pub(crate) cmdq: Cmdq,
+ rmargs: CoherentAllocation<GspArgumentsCached>,
}
#[repr(C)]
@@ -117,12 +119,26 @@ pub(crate) fn new(pdev:
&pci::Device<device::Bound>) -> Result<impl PinInit<Self
// Creates its own PTE array.
let cmdq = Cmdq::new(dev)?;
+ let rmargs =
CoherentAllocation::<GspArgumentsCached>::alloc_coherent(
+ dev,
+ 1,
+ GFP_KERNEL | __GFP_ZERO,
+ )?;
+ dma_write!(libos[3] =
LibosMemoryRegionInitArgument::new("RMARGS", &rmargs)?)?;
+
+ dma_write!(
+ rmargs[0] = fw::GspArgumentsCached::new(
+ fw::MessageQueueInitArguments::new(&cmdq),
+ fw::GspSrInitArguments::new()
+ )
+ )?;
Ok(try_pin_init!(Self {
libos,
loginit,
logintr,
logrm,
+ rmargs,
cmdq,
}))
}
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs
b/drivers/gpu/nova-core/gsp/cmdq.rs
index 3f8cb7a35922..da074a2ed0d9 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -6,7 +6,7 @@
use kernel::alloc::flags::GFP_KERNEL;
use kernel::device;
-use kernel::dma::CoherentAllocation;
+use kernel::dma::{CoherentAllocation, DmaAddress};
use kernel::dma_write;
use kernel::io::poll::read_poll_timeout;
use kernel::prelude::*;
@@ -247,10 +247,25 @@ pub(crate) struct Cmdq {
dev: ARef<device::Device>,
seq: u32,
gsp_mem: DmaGspMem,
- pub _nr_ptes: u32,
}
impl Cmdq {
+ /// Offset of the data after the PTEs.
+ const POST_PTE_OFFSET: usize = core::mem::offset_of!(GspMem, cpuq);
+
+ /// Offset of command queue ring buffer.
+ pub(crate) const CMDQ_OFFSET: usize = core::mem::offset_of!(GspMem, cpuq)
+ + core::mem::offset_of!(Msgq, msgq)
+ - Self::POST_PTE_OFFSET;
+
+ /// Offset of message queue ring buffer.
+ pub(crate) const STATQ_OFFSET: usize = core::mem::offset_of!(GspMem, gspq)
+ + core::mem::offset_of!(Msgq, msgq)
+ - Self::POST_PTE_OFFSET;
+
+ /// Number of page table entries for the GSP shared region.
+ pub(crate) const NUM_PTES: usize = size_of::<GspMem>() >>
GSP_PAGE_SHIFT;
+
pub(crate) fn new(dev: &device::Device<device::Bound>) ->
Result<Cmdq> {
let gsp_mem = DmaGspMem::new(dev)?;
let nr_ptes = size_of::<GspMem>() >> GSP_PAGE_SHIFT;
@@ -260,7 +275,6 @@ pub(crate) fn new(dev:
&device::Device<device::Bound>) -> Result<Cmdq> {
dev: dev.into(),
seq: 0,
gsp_mem,
- _nr_ptes: nr_ptes as u32,
})
}
@@ -490,4 +504,8 @@ pub(crate) fn receive_msg_from_gsp<M: MessageFromGsp,
R>(
.advance_cpu_read_ptr(msg_header.length().div_ceil(GSP_PAGE_SIZE as
u32));
result
}
+
+ pub(crate) fn dma_handle(&self) -> DmaAddress {
+ self.gsp_mem.0.dma_handle()
+ }
}
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index a2ce570ddfaf..70abda1c2af8 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -16,6 +16,7 @@
use crate::firmware::gsp::GspFirmware;
use crate::gpu::Chipset;
+use crate::gsp::cmdq::Cmdq;
use crate::gsp::FbLayout;
use crate::gsp::GSP_PAGE_SIZE;
@@ -483,3 +484,62 @@ unsafe impl AsBytes for GspMsgElement {}
// SAFETY: This struct only contains integer types for which all bit patterns
// are valid.
unsafe impl FromBytes for GspMsgElement {}
+
+#[repr(transparent)]
+pub(crate) struct GspArgumentsCached(bindings::GSP_ARGUMENTS_CACHED);
+
+impl GspArgumentsCached {
+ pub(crate) fn new(
+ queue_arguments: MessageQueueInitArguments,
+ sr_arguments: GspSrInitArguments,
+ ) -> Self {
+ Self(bindings::GSP_ARGUMENTS_CACHED {
+ messageQueueInitArguments: queue_arguments.0,
+ srInitArguments: sr_arguments.0,
+ bDmemStack: 1,
+ ..Default::default()
+ })
+ }
+}
+
+impl From<GspArgumentsCached> for bindings::GSP_ARGUMENTS_CACHED {
+ fn from(value: GspArgumentsCached) -> Self {
+ value.0
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for GspArgumentsCached {}
+
+// SAFETY: This struct only contains integer types for which all bit patterns
+// are valid.
+unsafe impl FromBytes for GspArgumentsCached {}
+
+#[repr(transparent)]
+pub(crate) struct
MessageQueueInitArguments(bindings::MESSAGE_QUEUE_INIT_ARGUMENTS);
+
+impl MessageQueueInitArguments {
+ pub(crate) fn new(cmdq: &Cmdq) -> Self {
+ Self(bindings::MESSAGE_QUEUE_INIT_ARGUMENTS {
+ sharedMemPhysAddr: cmdq.dma_handle(),
+ pageTableEntryCount: Cmdq::NUM_PTES as u32,
+ cmdQueueOffset: Cmdq::CMDQ_OFFSET as u64,
+ statQueueOffset: Cmdq::STATQ_OFFSET as u64,
+ ..Default::default()
+ })
+ }
+}
+
+#[repr(transparent)]
+pub(crate) struct GspSrInitArguments(bindings::GSP_SR_INIT_ARGUMENTS);
+
+impl GspSrInitArguments {
+ pub(crate) fn new() -> Self {
+ Self(bindings::GSP_SR_INIT_ARGUMENTS {
+ oldLevel: 0,
+ flags: 0,
+ bInPMTransition: 0,
+ ..Default::default()
+ })
+ }
+}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 1251b0c313ce..17fb2392ec3c 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -321,138 +321,6 @@ fn fmt(&self, fmt: &mut
::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
pub type _bindgen_ty_3 = ffi::c_uint;
#[repr(C)]
#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct DOD_METHOD_DATA {
- pub status: u32_,
- pub acpiIdListLen: u32_,
- pub acpiIdList: [u32_; 16usize],
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct JT_METHOD_DATA {
- pub status: u32_,
- pub jtCaps: u32_,
- pub jtRevId: u16_,
- pub bSBIOSCaps: u8_,
- pub __bindgen_padding_0: u8,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct MUX_METHOD_DATA_ELEMENT {
- pub acpiId: u32_,
- pub mode: u32_,
- pub status: u32_,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct MUX_METHOD_DATA {
- pub tableLen: u32_,
- pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
- pub acpiIdMuxPartTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
- pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct CAPS_METHOD_DATA {
- pub status: u32_,
- pub optimusCaps: u32_,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct ACPI_METHOD_DATA {
- pub bValid: u8_,
- pub __bindgen_padding_0: [u8; 3usize],
- pub dodMethodData: DOD_METHOD_DATA,
- pub jtMethodData: JT_METHOD_DATA,
- pub muxMethodData: MUX_METHOD_DATA,
- pub capsMethodData: CAPS_METHOD_DATA,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct BUSINFO {
- pub deviceID: u16_,
- pub vendorID: u16_,
- pub subdeviceID: u16_,
- pub subvendorID: u16_,
- pub revisionID: u8_,
- pub __bindgen_padding_0: u8,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct GSP_VF_INFO {
- pub totalVFs: u32_,
- pub firstVFOffset: u32_,
- pub FirstVFBar0Address: u64_,
- pub FirstVFBar1Address: u64_,
- pub FirstVFBar2Address: u64_,
- pub b64bitBar0: u8_,
- pub b64bitBar1: u8_,
- pub b64bitBar2: u8_,
- pub __bindgen_padding_0: [u8; 5usize],
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct GSP_PCIE_CONFIG_REG {
- pub linkCap: u32_,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
-pub struct GspSystemInfo {
- pub gpuPhysAddr: u64_,
- pub gpuPhysFbAddr: u64_,
- pub gpuPhysInstAddr: u64_,
- pub gpuPhysIoAddr: u64_,
- pub nvDomainBusDeviceFunc: u64_,
- pub simAccessBufPhysAddr: u64_,
- pub notifyOpSharedSurfacePhysAddr: u64_,
- pub pcieAtomicsOpMask: u64_,
- pub consoleMemSize: u64_,
- pub maxUserVa: u64_,
- pub pciConfigMirrorBase: u32_,
- pub pciConfigMirrorSize: u32_,
- pub PCIDeviceID: u32_,
- pub PCISubDeviceID: u32_,
- pub PCIRevisionID: u32_,
- pub pcieAtomicsCplDeviceCapMask: u32_,
- pub oorArch: u8_,
- pub __bindgen_padding_0: [u8; 7usize],
- pub clPdbProperties: u64_,
- pub Chipset: u32_,
- pub bGpuBehindBridge: u8_,
- pub bFlrSupported: u8_,
- pub b64bBar0Supported: u8_,
- pub bMnocAvailable: u8_,
- pub chipsetL1ssEnable: u32_,
- pub bUpstreamL0sUnsupported: u8_,
- pub bUpstreamL1Unsupported: u8_,
- pub bUpstreamL1PorSupported: u8_,
- pub bUpstreamL1PorMobileOnly: u8_,
- pub bSystemHasMux: u8_,
- pub upstreamAddressValid: u8_,
- pub FHBBusInfo: BUSINFO,
- pub chipsetIDInfo: BUSINFO,
- pub __bindgen_padding_1: [u8; 2usize],
- pub acpiMethodData: ACPI_METHOD_DATA,
- pub hypervisorType: u32_,
- pub bIsPassthru: u8_,
- pub __bindgen_padding_2: [u8; 7usize],
- pub sysTimerOffsetNs: u64_,
- pub gspVFInfo: GSP_VF_INFO,
- pub bIsPrimary: u8_,
- pub isGridBuild: u8_,
- pub __bindgen_padding_3: [u8; 2usize],
- pub pcieConfigReg: GSP_PCIE_CONFIG_REG,
- pub gridBuildCsp: u32_,
- pub bPreserveVideoMemoryAllocations: u8_,
- pub bTdrEventSupported: u8_,
- pub bFeatureStretchVblankCapable: u8_,
- pub bEnableDynamicGranularityPageArrays: u8_,
- pub bClockBoostSupported: u8_,
- pub bRouteDispIntrsToCPU: u8_,
- pub __bindgen_padding_4: [u8; 6usize],
- pub hostPageSize: u64_,
-}
-#[repr(C)]
-#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct MESSAGE_QUEUE_INIT_ARGUMENTS {
pub sharedMemPhysAddr: u64_,
pub pageTableEntryCount: u32_,
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 09/14] gpu: nova-core: Add bindings and accessors for GspSystemInfo
Adds bindings and an in-place initialiser for the GspSystemInfo struct.
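As a hedged usage sketch, the initialiser is meant to be run in place, for
example directly on command queue memory via Cmdq::send_gsp_command(). The
helper below, its module location, and the CommandToGsp implementation for
GspSystemInfo are assumptions for the example; the actual wiring is added
later in the series.

  use kernel::prelude::*;
  use kernel::{device, pci};

  use super::cmdq::Cmdq;
  use super::fw::commands::GspSystemInfo;
  use crate::driver::Bar0;

  // Hypothetical helper (assumed to live inside the gsp module): the
  // initialiser fills the large GspSystemInfo directly in queue memory,
  // avoiding a stack copy.
  fn send_system_info(
      cmdq: &mut Cmdq,
      bar: &Bar0,
      pdev: &pci::Device<device::Bound>,
  ) -> Result {
      // Assumes `impl CommandToGsp for GspSystemInfo` from a later patch.
      cmdq.send_gsp_command(bar, GspSystemInfo::init(pdev))
  }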
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Derive Zeroable trait
Changes for v4:
- Use `init!` macros
Changes for v3:
- New for v3
---
drivers/gpu/nova-core/gsp/fw.rs | 1 +
drivers/gpu/nova-core/gsp/fw/commands.rs | 51 +++++++
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 132 ++++++++++++++++++
3 files changed, 184 insertions(+)
create mode 100644 drivers/gpu/nova-core/gsp/fw/commands.rs
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 70abda1c2af8..4563e33e0859 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
+pub(crate) mod commands;
mod r570_144;
// Alias to avoid repeating the version number with every use.
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs
b/drivers/gpu/nova-core/gsp/fw/commands.rs
new file mode 100644
index 000000000000..9a524bba1ac4
--- /dev/null
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use super::bindings;
+
+use kernel::prelude::*;
+use kernel::transmute::{AsBytes, FromBytes};
+use kernel::{device, pci};
+
+#[repr(transparent)]
+pub(crate) struct GspSystemInfo {
+ inner: bindings::GspSystemInfo,
+}
+
+impl GspSystemInfo {
+ #[allow(non_snake_case)]
+ pub(crate) fn init<'a>(dev: &'a
pci::Device<device::Bound>) -> impl Init<Self, Error> + 'a {
+ type InnerGspSystemInfo = bindings::GspSystemInfo;
+ let init_inner = try_init!(InnerGspSystemInfo {
+ gpuPhysAddr: dev.resource_start(0)?,
+ gpuPhysFbAddr: dev.resource_start(1)?,
+ gpuPhysInstAddr: dev.resource_start(3)?,
+ nvDomainBusDeviceFunc: u64::from(dev.dev_id()),
+
+ // Using TASK_SIZE in r535_gsp_rpc_set_system_info() seems wrong
because
+ // TASK_SIZE is per-task. That's probably a design issue in
GSP-RM though.
+ maxUserVa: (1 << 47) - 4096,
+ pciConfigMirrorBase: 0x088000,
+ pciConfigMirrorSize: 0x001000,
+
+ PCIDeviceID: (u32::from(dev.device_id()) << 16) |
u32::from(dev.vendor_id().as_raw()),
+ PCISubDeviceID: (u32::from(dev.subsystem_device_id()) << 16)
+ | u32::from(dev.subsystem_vendor_id()),
+ PCIRevisionID: u32::from(dev.revision_id()),
+ bIsPrimary: 0,
+ bPreserveVideoMemoryAllocations: 0,
+ ..Zeroable::init_zeroed()
+ });
+
+ try_init!(GspSystemInfo {
+ inner <- init_inner,
+ })
+ }
+}
+
+// SAFETY: These structs don't meet the no-padding requirements of AsBytes
but
+// that is not a problem because they are not used outside the kernel.
+unsafe impl AsBytes for GspSystemInfo {}
+
+// SAFETY: These structs don't meet the no-padding requirements of
FromBytes but
+// that is not a problem because they are not used outside the kernel.
+unsafe impl FromBytes for GspSystemInfo {}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 17fb2392ec3c..1251b0c313ce 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -321,6 +321,138 @@ fn fmt(&self, fmt: &mut
::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
pub type _bindgen_ty_3 = ffi::c_uint;
#[repr(C)]
#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct DOD_METHOD_DATA {
+ pub status: u32_,
+ pub acpiIdListLen: u32_,
+ pub acpiIdList: [u32_; 16usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct JT_METHOD_DATA {
+ pub status: u32_,
+ pub jtCaps: u32_,
+ pub jtRevId: u16_,
+ pub bSBIOSCaps: u8_,
+ pub __bindgen_padding_0: u8,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct MUX_METHOD_DATA_ELEMENT {
+ pub acpiId: u32_,
+ pub mode: u32_,
+ pub status: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct MUX_METHOD_DATA {
+ pub tableLen: u32_,
+ pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+ pub acpiIdMuxPartTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+ pub acpiIdMuxStateTable: [MUX_METHOD_DATA_ELEMENT; 16usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct CAPS_METHOD_DATA {
+ pub status: u32_,
+ pub optimusCaps: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct ACPI_METHOD_DATA {
+ pub bValid: u8_,
+ pub __bindgen_padding_0: [u8; 3usize],
+ pub dodMethodData: DOD_METHOD_DATA,
+ pub jtMethodData: JT_METHOD_DATA,
+ pub muxMethodData: MUX_METHOD_DATA,
+ pub capsMethodData: CAPS_METHOD_DATA,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct BUSINFO {
+ pub deviceID: u16_,
+ pub vendorID: u16_,
+ pub subdeviceID: u16_,
+ pub subvendorID: u16_,
+ pub revisionID: u8_,
+ pub __bindgen_padding_0: u8,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_VF_INFO {
+ pub totalVFs: u32_,
+ pub firstVFOffset: u32_,
+ pub FirstVFBar0Address: u64_,
+ pub FirstVFBar1Address: u64_,
+ pub FirstVFBar2Address: u64_,
+ pub b64bitBar0: u8_,
+ pub b64bitBar1: u8_,
+ pub b64bitBar2: u8_,
+ pub __bindgen_padding_0: [u8; 5usize],
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GSP_PCIE_CONFIG_REG {
+ pub linkCap: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
+pub struct GspSystemInfo {
+ pub gpuPhysAddr: u64_,
+ pub gpuPhysFbAddr: u64_,
+ pub gpuPhysInstAddr: u64_,
+ pub gpuPhysIoAddr: u64_,
+ pub nvDomainBusDeviceFunc: u64_,
+ pub simAccessBufPhysAddr: u64_,
+ pub notifyOpSharedSurfacePhysAddr: u64_,
+ pub pcieAtomicsOpMask: u64_,
+ pub consoleMemSize: u64_,
+ pub maxUserVa: u64_,
+ pub pciConfigMirrorBase: u32_,
+ pub pciConfigMirrorSize: u32_,
+ pub PCIDeviceID: u32_,
+ pub PCISubDeviceID: u32_,
+ pub PCIRevisionID: u32_,
+ pub pcieAtomicsCplDeviceCapMask: u32_,
+ pub oorArch: u8_,
+ pub __bindgen_padding_0: [u8; 7usize],
+ pub clPdbProperties: u64_,
+ pub Chipset: u32_,
+ pub bGpuBehindBridge: u8_,
+ pub bFlrSupported: u8_,
+ pub b64bBar0Supported: u8_,
+ pub bMnocAvailable: u8_,
+ pub chipsetL1ssEnable: u32_,
+ pub bUpstreamL0sUnsupported: u8_,
+ pub bUpstreamL1Unsupported: u8_,
+ pub bUpstreamL1PorSupported: u8_,
+ pub bUpstreamL1PorMobileOnly: u8_,
+ pub bSystemHasMux: u8_,
+ pub upstreamAddressValid: u8_,
+ pub FHBBusInfo: BUSINFO,
+ pub chipsetIDInfo: BUSINFO,
+ pub __bindgen_padding_1: [u8; 2usize],
+ pub acpiMethodData: ACPI_METHOD_DATA,
+ pub hypervisorType: u32_,
+ pub bIsPassthru: u8_,
+ pub __bindgen_padding_2: [u8; 7usize],
+ pub sysTimerOffsetNs: u64_,
+ pub gspVFInfo: GSP_VF_INFO,
+ pub bIsPrimary: u8_,
+ pub isGridBuild: u8_,
+ pub __bindgen_padding_3: [u8; 2usize],
+ pub pcieConfigReg: GSP_PCIE_CONFIG_REG,
+ pub gridBuildCsp: u32_,
+ pub bPreserveVideoMemoryAllocations: u8_,
+ pub bTdrEventSupported: u8_,
+ pub bFeatureStretchVblankCapable: u8_,
+ pub bEnableDynamicGranularityPageArrays: u8_,
+ pub bClockBoostSupported: u8_,
+ pub bRouteDispIntrsToCPU: u8_,
+ pub __bindgen_padding_4: [u8; 6usize],
+ pub hostPageSize: u64_,
+}
+#[repr(C)]
+#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct MESSAGE_QUEUE_INIT_ARGUMENTS {
pub sharedMemPhysAddr: u64_,
pub pageTableEntryCount: u32_,
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 10/14] gpu: nova-core: Add bindings for the GSP RM registry tables
Add bindings and constructors for the PACKED_REGISTRY_TABLE and
PACKED_REGISTRY_ENTRY structures.
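For context, the serialised registry that GSP-RM consumes (built by the
command added in the next patch) is the fixed-size table header, followed by
one fixed-size entry per registry key, followed by the NUL-terminated key
strings that each entry's nameOffset points back into. A rough sketch of the
layout, assuming the bindings below:

    // PACKED_REGISTRY_TABLE     header: size, numEntries
    // PACKED_REGISTRY_ENTRY * N one per key: nameOffset, type, data, length
    // "Key0\0Key1\0..."         key strings, addressed by nameOffset from
    //                           the start of the table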
Signed-off-by: Alistair Popple <apopple at nvidia.com>
---
Changes for v5:
- Derive Zeroable trait
Changes for v4:
- Use `init!` macros
- Add comments around only supporting DWORD entry types
Changes for v3:
- New for v3
---
drivers/gpu/nova-core/gsp/fw/commands.rs | 49 +++++++++++++++++++
.../gpu/nova-core/gsp/fw/r570_144/bindings.rs | 16 ++++++
2 files changed, 65 insertions(+)
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index 9a524bba1ac4..79a69c6279e8 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -49,3 +49,52 @@ unsafe impl AsBytes for GspSystemInfo {}
// SAFETY: These structs don't meet the no-padding requirements of FromBytes but
// that is not a problem because they are not used outside the kernel.
unsafe impl FromBytes for GspSystemInfo {}
+
+#[repr(transparent)]
+pub(crate) struct PackedRegistryEntry(bindings::PACKED_REGISTRY_ENTRY);
+
+impl PackedRegistryEntry {
+ pub(crate) fn new(offset: u32, value: u32) -> Self {
+ Self({
+ bindings::PACKED_REGISTRY_ENTRY {
+ nameOffset: offset,
+
+ // We only support DWORD types for now. Support for other types
+ // will come later if required.
+ type_: bindings::REGISTRY_TABLE_ENTRY_TYPE_DWORD as u8,
+ __bindgen_padding_0: Default::default(),
+ data: value,
+ length: 0,
+ }
+ })
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for PackedRegistryEntry {}
+
+#[repr(transparent)]
+pub(crate) struct PackedRegistryTable {
+ inner: bindings::PACKED_REGISTRY_TABLE,
+}
+
+impl PackedRegistryTable {
+ #[allow(non_snake_case)]
+ pub(crate) fn init(num_entries: u32, size: u32) -> impl Init<Self> {
+ type InnerPackedRegistryTable = bindings::PACKED_REGISTRY_TABLE;
+ let init_inner = init!(InnerPackedRegistryTable {
+ numEntries: num_entries,
+ size,
+ entries: Default::default()
+ });
+
+ init!(PackedRegistryTable { inner <- init_inner })
+ }
+}
+
+// SAFETY: Padding is explicit and will not contain uninitialized data.
+unsafe impl AsBytes for PackedRegistryTable {}
+
+// SAFETY: This struct only contains integer types for which all bit patterns
+// are valid.
+unsafe impl FromBytes for PackedRegistryTable {}
diff --git a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
index 1251b0c313ce..32933874ff97 100644
--- a/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
+++ b/drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
@@ -649,6 +649,22 @@ pub struct LibosMemoryRegionInitArgument {
pub __bindgen_padding_0: [u8; 6usize],
}
#[repr(C)]
+#[derive(Debug, Default, Copy, Clone)]
+pub struct PACKED_REGISTRY_ENTRY {
+ pub nameOffset: u32_,
+ pub type_: u8_,
+ pub __bindgen_padding_0: [u8; 3usize],
+ pub data: u32_,
+ pub length: u32_,
+}
+#[repr(C)]
+#[derive(Debug, Default)]
+pub struct PACKED_REGISTRY_TABLE {
+ pub size: u32_,
+ pub numEntries: u32_,
+ pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>,
+}
+#[repr(C)]
#[derive(Debug, Default, Copy, Clone, Zeroable)]
pub struct msgqTxHeader {
pub version: u32_,
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 11/14] gpu: nova-core: gsp: Create RM registry and sysinfo commands
Add the RM registry and system information commands that enable the host
driver to configure GSP firmware parameters during initialization.
The RM registry is serialized into a packed format and sent via the
command queue. For now only the three parameters required to boot the
GSP are hardcoded. In future a kernel module parameter will be added so
that other registry entries can be specified.
Also add the system info command, which provides required hardware
information to the GSP. Both commands use the GSP command queue
infrastructure to issue commands that the GSP reads during boot.
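As a worked sketch of the offsets that write_payload() computes below,
assuming the r570_144 layout from the previous patch (8-byte
PACKED_REGISTRY_TABLE header, 16-byte PACKED_REGISTRY_ENTRY, zero-sized
entries field), with the table header sent as the command itself and the
rest as payload:

    // string_data_start_offset = 8 + 3 * 16 = 56
    // "RMSecBusResetEnable\0"   (20 bytes) -> nameOffset 56
    // "RMForcePcieConfigSave\0" (22 bytes) -> nameOffset 76
    // "RMDevidCheckIgnore\0"    (19 bytes) -> nameOffset 98
    // registry.size() = 3 * 16 + 61 = 109 bytes of payload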
Signed-off-by: Alistair Popple <apopple at nvidia.com>
Reviewed-by: Lyude Paul <lyude at redhat.com>
---
Changes for v4:
- Use `init!` macros
- Update to use send_gsp_command_with_payload() for the registry
- Add RMDevidCheckIgnore registry setting (thanks Timur)
Changes for v3:
- Use MsgFunction enum
- Rename GspCmdq to Cmdq
- Rename GspCommandToGsp to CommandToGsp
- Rename GspMessageFromGsp to MessageFromGsp
- Split bindings into separate patch
Changes for v2:
- Rebased on Alex's latest tree
---
drivers/gpu/nova-core/gsp.rs | 1 +
drivers/gpu/nova-core/gsp/boot.rs | 6 +-
drivers/gpu/nova-core/gsp/cmdq.rs | 2 -
drivers/gpu/nova-core/gsp/commands.rs | 115 ++++++++++++++++++++++++++
drivers/gpu/nova-core/sbuffer.rs | 1 -
5 files changed, 121 insertions(+), 4 deletions(-)
create mode 100644 drivers/gpu/nova-core/gsp/commands.rs
diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
index 58b595b8badd..0f88725266bb 100644
--- a/drivers/gpu/nova-core/gsp.rs
+++ b/drivers/gpu/nova-core/gsp.rs
@@ -22,6 +22,7 @@
use fw::GspArgumentsCached;
pub(crate) mod cmdq;
+pub(crate) mod commands;
pub(crate) const GSP_PAGE_SHIFT: usize = 12;
pub(crate) const GSP_PAGE_SIZE: usize = 1 << GSP_PAGE_SHIFT;
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 1d2448331d7a..0b306313ec53 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -16,6 +16,7 @@
FIRMWARE_VERSION,
};
use crate::gpu::Chipset;
+use crate::gsp::commands::{build_registry, set_system_info};
use crate::gsp::GspFwWprMeta;
use crate::regs;
use crate::vbios::Vbios;
@@ -105,7 +106,7 @@ fn run_fwsec_frts(
///
/// Upon return, the GSP is up and running, and its runtime object given as return value.
pub(crate) fn boot(
- self: Pin<&mut Self>,
+ mut self: Pin<&mut Self>,
pdev: &pci::Device<device::Bound>,
bar: &Bar0,
chipset: Chipset,
@@ -139,6 +140,9 @@ pub(crate) fn boot(
CoherentAllocation::<GspFwWprMeta>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?;
+ dma_write!(wpr_meta[0] = GspFwWprMeta::new(&gsp_fw, &fb_layout))?;
+ set_system_info(&mut self.cmdq, pdev, bar)?;
+ build_registry(&mut self.cmdq, bar)?;
+
Ok(())
}
}
diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
index da074a2ed0d9..0cace0dacf13 100644
--- a/drivers/gpu/nova-core/gsp/cmdq.rs
+++ b/drivers/gpu/nova-core/gsp/cmdq.rs
@@ -292,7 +292,6 @@ fn notify_gsp(bar: &Bar0) {
NV_PGSP_QUEUE_HEAD::default().set_address(0).write(bar);
}
- #[expect(unused)]
pub(crate) fn send_gsp_command<M, E>(&mut self, bar: &Bar0, init: impl Init<M, E>) -> Result
where
M: CommandToGsp,
@@ -345,7 +344,6 @@ struct FullCommand<M> {
Ok(())
}
- #[expect(unused)]
pub(crate) fn send_gsp_command_with_payload<M, E>(
&mut self,
bar: &Bar0,
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
new file mode 100644
index 000000000000..9fcf37984314
--- /dev/null
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use kernel::build_assert;
+use kernel::device;
+use kernel::pci;
+use kernel::prelude::*;
+use kernel::transmute::AsBytes;
+
+use super::fw::commands::*;
+use super::fw::MsgFunction;
+use crate::driver::Bar0;
+use crate::gsp::cmdq::Cmdq;
+use crate::gsp::cmdq::{CommandToGsp, CommandToGspWithPayload};
+use crate::gsp::GSP_PAGE_SIZE;
+use crate::sbuffer::SBufferIter;
+
+// For now we hard-code the registry entries. Future work will allow others to
+// be added as module parameters.
+const GSP_REGISTRY_NUM_ENTRIES: usize = 3;
+pub(crate) struct RegistryEntry {
+ key: &'static str,
+ value: u32,
+}
+
+pub(crate) struct RegistryTable {
+ entries: [RegistryEntry; GSP_REGISTRY_NUM_ENTRIES],
+}
+
+impl CommandToGsp for PackedRegistryTable {
+ const FUNCTION: MsgFunction = MsgFunction::SetRegistry;
+}
+impl CommandToGspWithPayload for PackedRegistryTable {}
+
+impl RegistryTable {
+ fn write_payload<'a, I: Iterator<Item = &'a mut [u8]>>(
+ &self,
+ mut sbuffer: SBufferIter<I>,
+ ) -> Result {
+ let string_data_start_offset = size_of::<PackedRegistryTable>()
+ + GSP_REGISTRY_NUM_ENTRIES * size_of::<PackedRegistryEntry>();
+
+ // Array for string data.
+ let mut string_data = KVec::new();
+
+ for entry in self.entries.iter().take(GSP_REGISTRY_NUM_ENTRIES) {
+ sbuffer.write_all(
+ PackedRegistryEntry::new(
+ (string_data_start_offset + string_data.len()) as u32,
+ entry.value,
+ )
+ .as_bytes(),
+ )?;
+
+ let key_bytes = entry.key.as_bytes();
+ string_data.extend_from_slice(key_bytes, GFP_KERNEL)?;
+ string_data.push(0, GFP_KERNEL)?;
+ }
+
+ sbuffer.write_all(string_data.as_slice())
+ }
+
+ fn size(&self) -> usize {
+ let mut key_size = 0;
+ for i in 0..GSP_REGISTRY_NUM_ENTRIES {
+ key_size += self.entries[i].key.len() + 1; // +1 for NULL terminator
+ }
+ GSP_REGISTRY_NUM_ENTRIES * size_of::<PackedRegistryEntry>() + key_size
+ }
+}
+
+pub(crate) fn build_registry(cmdq: &mut Cmdq, bar: &Bar0) -> Result {
+ let registry = RegistryTable {
+ entries: [
+ // RMSecBusResetEnable - enables PCI secondary bus reset
+ RegistryEntry {
+ key: "RMSecBusResetEnable",
+ value: 1,
+ },
+ // RMForcePcieConfigSave - forces GSP-RM to preserve PCI
+ // configuration registers on any PCI reset.
+ RegistryEntry {
+ key: "RMForcePcieConfigSave",
+ value: 1,
+ },
+ // RMDevidCheckIgnore - allows GSP-RM to boot even if the PCI dev ID
+ // is not found in the internal product name database.
+ RegistryEntry {
+ key: "RMDevidCheckIgnore",
+ value: 1,
+ },
+ ],
+ };
+
+ cmdq.send_gsp_command_with_payload(
+ bar,
+ registry.size(),
+ PackedRegistryTable::init(GSP_REGISTRY_NUM_ENTRIES as u32, registry.size() as u32),
+ |sbuffer| registry.write_payload(sbuffer),
+ )
+}
+
+impl CommandToGsp for GspSystemInfo {
+ const FUNCTION: MsgFunction = MsgFunction::GspSetSystemInfo;
+}
+
+pub(crate) fn set_system_info(
+ cmdq: &mut Cmdq,
+ dev: &pci::Device<device::Bound>,
+ bar: &Bar0,
+) -> Result {
+ build_assert!(size_of::<GspSystemInfo>() < GSP_PAGE_SIZE);
+ cmdq.send_gsp_command(bar, GspSystemInfo::init(dev))?;
+
+ Ok(())
+}
diff --git a/drivers/gpu/nova-core/sbuffer.rs b/drivers/gpu/nova-core/sbuffer.rs
index 1a27226b65d8..e88fdab990b1 100644
--- a/drivers/gpu/nova-core/sbuffer.rs
+++ b/drivers/gpu/nova-core/sbuffer.rs
@@ -186,7 +186,6 @@ fn get_slice_mut(&mut self, len: usize) -> Option<&'a mut [u8]> {
/// Ideally we would implement `Write`, but it is not available in `core`.
/// So mimic `std::io::Write::write_all`.
- #[expect(unused)]
pub(crate) fn write_all(&mut self, mut src: &[u8]) -> Result {
while !src.is_empty() {
match self.get_slice_mut(src.len()) {
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 12/14] nova-core: falcon: Add support to check if RISC-V is active
From: Joel Fernandes <joelagnelf at nvidia.com>
Add a definition for the RISCV_CPUCTL register and use it in a new Falcon
API to check whether the RISC-V core of a Falcon is active. The sequencer
needs this to know whether the GSP's RISC-V processor is active.
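For reference, the GSP boot patch at the end of this series polls this
helper with read_poll_timeout() to wait for the core to come up, along
these lines:

    // Poll every 10ms until the RISC-V core reports active, giving up
    // after 5 seconds.
    read_poll_timeout(
        || Ok(gsp_falcon.is_riscv_active(bar)),
        |active: &bool| *active,
        Delta::from_millis(10),
        Delta::from_secs(5),
    )?;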
Signed-off-by: Joel Fernandes <joelagnelf at nvidia.com>
Reviewed-by: Lyude Paul <lyude at redhat.com>
---
Changes for v4:
- Return bool instead of Result<bool> from is_riscv_active() as it
can't fail (thanks Timur).
- Update register definitions to correct Falcon
- Switch register definition order
---
drivers/gpu/nova-core/falcon.rs | 9 +++++++++
drivers/gpu/nova-core/regs.rs | 7 ++++++-
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/falcon.rs b/drivers/gpu/nova-core/falcon.rs
index 734ac0fbfb49..185ed6d1cfb8 100644
--- a/drivers/gpu/nova-core/falcon.rs
+++ b/drivers/gpu/nova-core/falcon.rs
@@ -506,4 +506,13 @@ pub(crate) fn signature_reg_fuse_version(
self.hal
.signature_reg_fuse_version(self, bar, engine_id_mask, ucode_id)
}
+
+ /// Check if the RISC-V core is active.
+ ///
+ /// Returns `true` if the RISC-V core is active, `false` otherwise.
+ #[expect(unused)]
+ pub(crate) fn is_riscv_active(&self, bar: &Bar0) -> bool {
+ let cpuctl = regs::NV_PRISCV_RISCV_CPUCTL::read(bar, &E::ID);
+ cpuctl.active_stat()
+ }
}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 0585699ae951..3bd1bddb16bb 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -324,7 +324,12 @@ pub(crate) fn mem_scrubbing_done(self) -> bool {
// PRISCV
-register!(NV_PRISCV_RISCV_BCR_CTRL @ PFalconBase[0x00001668] {
+register!(NV_PRISCV_RISCV_CPUCTL @ PFalcon2Base[0x00000388] {
+ 0:0 halted as bool;
+ 7:7 active_stat as bool;
+});
+
+register!(NV_PRISCV_RISCV_BCR_CTRL @ PFalcon2Base[0x00000668] {
0:0 valid as bool;
4:4 core_select as bool => PeregrineCoreSelect;
8:8 br_fetch as bool;
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 13/14] nova-core: falcon: Add support to write firmware version
From: Joel Fernandes <joelagnelf at nvidia.com>
This will be needed by both the GSP boot code and the GSP resume code in
the sequencer.
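For reference, the final patch in this series calls it once booter-load has
completed, roughly:

    // Record the GSP bootloader's application version in NV_PFALCON_FALCON_OS.
    gsp_falcon.write_os_version(bar, gsp_fw.bootloader.app_version);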
Signed-off-by: Joel Fernandes <joelagnelf at nvidia.com>
Reviewed-by: Lyude Paul <lyude at redhat.com>
---
Changes for v5:
- Make it infallible
---
drivers/gpu/nova-core/falcon.rs | 8 ++++++++
drivers/gpu/nova-core/regs.rs | 6 ++++++
2 files changed, 14 insertions(+)
diff --git a/drivers/gpu/nova-core/falcon.rs b/drivers/gpu/nova-core/falcon.rs
index 185ed6d1cfb8..c871fd061987 100644
--- a/drivers/gpu/nova-core/falcon.rs
+++ b/drivers/gpu/nova-core/falcon.rs
@@ -515,4 +515,12 @@ pub(crate) fn is_riscv_active(&self, bar: &Bar0) -> bool {
let cpuctl = regs::NV_PRISCV_RISCV_CPUCTL::read(bar, &E::ID);
cpuctl.active_stat()
}
+
+ /// Write the application version to the OS register.
+ #[expect(dead_code)]
+ pub(crate) fn write_os_version(&self, bar: &Bar0, app_version: u32) {
+ regs::NV_PFALCON_FALCON_OS::default()
+ .set_value(app_version)
+ .write(bar, &E::ID);
+ }
}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 3bd1bddb16bb..6eda5c44c599 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -215,6 +215,12 @@ pub(crate) fn vga_workspace_addr(self) -> Option<u64> {
31:0 value as u32;
});
+// Used to store version information about the firmware running
+// on the Falcon processor.
+register!(NV_PFALCON_FALCON_OS @ PFalconBase[0x00000080] {
+ 31:0 value as u32;
+});
+
register!(NV_PFALCON_FALCON_RM @ PFalconBase[0x00000084] {
31:0 value as u32;
});
--
2.50.1
Alistair Popple
2025-Oct-13 06:20 UTC
[PATCH v5 14/14] nova-core: gsp: Boot GSP
Boot the GSP to the RISC-V active state. Completing the boot requires
running the CPU sequencer, which will be added in a future commit.
Signed-off-by: Alistair Popple <apopple at nvidia.com>
Reviewed-by: Lyude Paul <lyude at redhat.com>
---
Changes for v4:
- Switch wait_on to read_poll_timeout
Changes for v3:
- Fixed minor nit from John
- Added booter load error thanks to Timur's suggestion
Changes for v2:
- Rebased on Alex's latest tree
---
drivers/gpu/nova-core/falcon.rs | 2 -
drivers/gpu/nova-core/firmware/riscv.rs | 3 +-
drivers/gpu/nova-core/gsp/boot.rs | 63 ++++++++++++++++++++++++-
3 files changed, 63 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/nova-core/falcon.rs b/drivers/gpu/nova-core/falcon.rs
index c871fd061987..98ad75b93ea2 100644
--- a/drivers/gpu/nova-core/falcon.rs
+++ b/drivers/gpu/nova-core/falcon.rs
@@ -510,14 +510,12 @@ pub(crate) fn signature_reg_fuse_version(
/// Check if the RISC-V core is active.
///
/// Returns `true` if the RISC-V core is active, `false` otherwise.
- #[expect(unused)]
pub(crate) fn is_riscv_active(&self, bar: &Bar0) -> bool {
let cpuctl = regs::NV_PRISCV_RISCV_CPUCTL::read(bar, &E::ID);
cpuctl.active_stat()
}
/// Write the application version to the OS register.
- #[expect(dead_code)]
pub(crate) fn write_os_version(&self, bar: &Bar0, app_version: u32) {
regs::NV_PFALCON_FALCON_OS::default()
.set_value(app_version)
diff --git a/drivers/gpu/nova-core/firmware/riscv.rs b/drivers/gpu/nova-core/firmware/riscv.rs
index 115b5f5355a1..98be14263366 100644
--- a/drivers/gpu/nova-core/firmware/riscv.rs
+++ b/drivers/gpu/nova-core/firmware/riscv.rs
@@ -52,7 +52,6 @@ fn new(bin_fw: &BinFirmware<'_>) -> Result<Self> {
}
/// A parsed firmware for a RISC-V core, ready to be loaded and run.
-#[expect(unused)]
pub(crate) struct RiscvFirmware {
/// Offset at which the code starts in the firmware image.
pub(crate) code_offset: u32,
@@ -61,7 +60,7 @@ pub(crate) struct RiscvFirmware {
/// Offset at which the manifest starts in the firmware image.
pub(crate) manifest_offset: u32,
/// Application version.
- app_version: u32,
+ pub app_version: u32,
/// Device-mapped firmware image.
pub ucode: DmaObject,
}
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 0b306313ec53..649c758eda70 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -3,8 +3,10 @@
use kernel::device;
use kernel::dma::CoherentAllocation;
use kernel::dma_write;
+use kernel::io::poll::read_poll_timeout;
use kernel::pci;
use kernel::prelude::*;
+use kernel::time::Delta;
use crate::driver::Bar0;
use crate::falcon::{gsp::Gsp, sec2::Sec2, Falcon};
@@ -127,7 +129,7 @@ pub(crate) fn boot(
Self::run_fwsec_frts(dev, gsp_falcon, bar, &bios, &fb_layout)?;
- let _booter_loader = BooterFirmware::new(
+ let booter_loader = BooterFirmware::new(
dev,
BooterKind::Loader,
chipset,
@@ -143,6 +145,65 @@ pub(crate) fn boot(
set_system_info(&mut self.cmdq, pdev, bar)?;
build_registry(&mut self.cmdq, bar)?;
+ gsp_falcon.reset(bar)?;
+ let libos_handle = self.libos.dma_handle();
+ let (mbox0, mbox1) = gsp_falcon.boot(
+ bar,
+ Some(libos_handle as u32),
+ Some((libos_handle >> 32) as u32),
+ )?;
+ dev_dbg!(
+ pdev.as_ref(),
+ "GSP MBOX0: {:#x}, MBOX1: {:#x}\n",
+ mbox0,
+ mbox1
+ );
+
+ dev_dbg!(
+ pdev.as_ref(),
+ "Using SEC2 to load and run the booter_load
firmware...\n"
+ );
+
+ sec2_falcon.reset(bar)?;
+ sec2_falcon.dma_load(bar, &booter_loader)?;
+ let wpr_handle = wpr_meta.dma_handle();
+ let (mbox0, mbox1) = sec2_falcon.boot(
+ bar,
+ Some(wpr_handle as u32),
+ Some((wpr_handle >> 32) as u32),
+ )?;
+ dev_dbg!(
+ pdev.as_ref(),
+ "SEC2 MBOX0: {:#x}, MBOX1{:#x}\n",
+ mbox0,
+ mbox1
+ );
+
+ if mbox0 != 0 {
+ dev_err!(
+ pdev.as_ref(),
+ "Booter-load failed with error {:#x}\n",
+ mbox0
+ );
+ return Err(ENODEV);
+ }
+
+ gsp_falcon.write_os_version(bar, gsp_fw.bootloader.app_version);
+
+ // Poll for RISC-V to become active before running sequencer
+ read_poll_timeout(
+ || Ok(gsp_falcon.is_riscv_active(bar)),
+ |val: &bool| *val,
+ Delta::from_millis(10),
+ Delta::from_secs(5),
+ )?;
+
+ dev_dbg!(
+ pdev.as_ref(),
+ "RISC-V active? {}\n",
+ gsp_falcon.is_riscv_active(bar),
+ );
+
Ok(())
}
}
--
2.50.1