Greg KH
2024-Sep-26 09:20 UTC
[RFC 01/29] nvkm/vgpu: introduce NVIDIA vGPU support prelude
On Sun, Sep 22, 2024 at 05:49:23AM -0700, Zhi Wang wrote:
> NVIDIA GPU virtualization is a technology that allows multiple virtual
> machines (VMs) to share the power of a single GPU, enabling greater
> flexibility, efficiency, and cost-effectiveness in data centers and cloud
> environments.
>
> The first step of supporting NVIDIA vGPU in nvkm is to introduce the
> necessary vGPU data structures and functions to hook into the
> (de)initialization path of nvkm.
>
> Introduce NVIDIA vGPU data structures and functions hooking into the
> (de)initialization path of nvkm and support the following patches.
>
> Cc: Neo Jia <cjia at nvidia.com>
> Cc: Surath Mitra <smitra at nvidia.com>
> Signed-off-by: Zhi Wang <zhiw at nvidia.com>

Some minor comments that are a hint you all aren't running checkpatch on
your code...

> --- /dev/null
> +++ b/drivers/gpu/drm/nouveau/include/nvkm/vgpu_mgr/vgpu_mgr.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: MIT */

Wait, what? Why? Ick. You all also forgot the copyright line :(

> --- /dev/null
> +++ b/drivers/gpu/drm/nouveau/nvkm/vgpu_mgr/vgpu_mgr.c
> @@ -0,0 +1,76 @@
> +/* SPDX-License-Identifier: MIT */
> +#include <core/device.h>
> +#include <core/pci.h>
> +#include <vgpu_mgr/vgpu_mgr.h>
> +
> +static bool support_vgpu_mgr = false;

A global variable for the whole system? Are you sure that will work
well over time? Why isn't this a per-device thing?

> +module_param_named(support_vgpu_mgr, support_vgpu_mgr, bool, 0400);

This is not the 1990's, please never add new module parameters, use
per-device variables. And no documentation? That's not ok either even
if you did want to have this.

> +static inline struct pci_dev *nvkm_to_pdev(struct nvkm_device *device)
> +{
> +        struct nvkm_device_pci *pci = container_of(device, typeof(*pci),
> +                                                   device);
> +
> +        return pci->pdev;
> +}
> +
> +/**
> + * nvkm_vgpu_mgr_is_supported - check if a platform supports vGPU
> + * @device: the nvkm_device pointer
> + *
> + * Returns: true on supported platform which is newer than ADA Lovelace
> + * with SRIOV support.
> + */
> +bool nvkm_vgpu_mgr_is_supported(struct nvkm_device *device)
> +{
> +        struct pci_dev *pdev = nvkm_to_pdev(device);
> +
> +        if (!support_vgpu_mgr)
> +                return false;
> +
> +        return device->card_type == AD100 && pci_sriov_get_totalvfs(pdev);

checkpatch please.

And "AD100" is an odd #define, as you know.

> +}
> +
> +/**
> + * nvkm_vgpu_mgr_is_enabled - check if vGPU support is enabled on a PF
> + * @device: the nvkm_device pointer
> + *
> + * Returns: true if vGPU enabled.
> + */
> +bool nvkm_vgpu_mgr_is_enabled(struct nvkm_device *device)
> +{
> +        return device->vgpu_mgr.enabled;

What happens if this changes right after you look at it?

> +}
> +
> +/**
> + * nvkm_vgpu_mgr_init - Initialize the vGPU manager support
> + * @device: the nvkm_device pointer
> + *
> + * Returns: 0 on success, -ENODEV on platforms that are not supported.
> + */
> +int nvkm_vgpu_mgr_init(struct nvkm_device *device)
> +{
> +        struct nvkm_vgpu_mgr *vgpu_mgr = &device->vgpu_mgr;
> +
> +        if (!nvkm_vgpu_mgr_is_supported(device))
> +                return -ENODEV;
> +
> +        vgpu_mgr->nvkm_dev = device;
> +        vgpu_mgr->enabled = true;
> +
> +        pci_info(nvkm_to_pdev(device),
> +                 "NVIDIA vGPU manager support is enabled.\n");

When drivers work properly, they are quiet.

Why can't you see this all in the sysfs tree instead to know if support
is there or not? You all are properly tying in your "sub driver" logic
to the driver model, right? (hint, I don't think so as it looks like
that isn't happening, but I could be missing it...)
thanks,

greg k-h
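For concreteness, here is a minimal sketch of the per-device direction
suggested above: the support state hangs off the device and is visible
through a read-only sysfs attribute, rather than through a module
parameter plus a pci_info() message. This is not code from the posted
series; the attribute name vgpu_mgr_supported and the drvdata lookup are
assumptions for illustration only.

/*
 * Hypothetical sketch: expose vGPU manager state as a per-device sysfs
 * attribute instead of a module parameter. Assumes the nvkm_device can
 * be recovered from the struct device via drvdata; a real patch would
 * use whatever lookup nouveau actually provides.
 */
#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t vgpu_mgr_supported_show(struct device *dev,
                                       struct device_attribute *attr,
                                       char *buf)
{
        struct nvkm_device *device = dev_get_drvdata(dev); /* assumed lookup */

        return sysfs_emit(buf, "%d\n", nvkm_vgpu_mgr_is_supported(device));
}
static DEVICE_ATTR_RO(vgpu_mgr_supported);

Registered with device_create_file() (or, better, from an attribute_group
so the driver core manages its lifetime), this makes the state queryable
per device under /sys/bus/pci/devices/ without any dmesg traffic.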
Zhi Wang
2024-Oct-14 09:59 UTC
[RFC 01/29] nvkm/vgpu: introduce NVIDIA vGPU support prelude
On 26/09/2024 12.20, Greg KH wrote:
> External email: Use caution opening links or attachments
>
> On Sun, Sep 22, 2024 at 05:49:23AM -0700, Zhi Wang wrote:
>> NVIDIA GPU virtualization is a technology that allows multiple virtual
>> machines (VMs) to share the power of a single GPU, enabling greater
>> flexibility, efficiency, and cost-effectiveness in data centers and cloud
>> environments.
>>
>> The first step of supporting NVIDIA vGPU in nvkm is to introduce the
>> necessary vGPU data structures and functions to hook into the
>> (de)initialization path of nvkm.
>>
>> Introduce NVIDIA vGPU data structures and functions hooking into the
>> (de)initialization path of nvkm and support the following patches.
>>
>> Cc: Neo Jia <cjia at nvidia.com>
>> Cc: Surath Mitra <smitra at nvidia.com>
>> Signed-off-by: Zhi Wang <zhiw at nvidia.com>
>
> Some minor comments that are a hint you all aren't running checkpatch on
> your code...
>
>> --- /dev/null
>> +++ b/drivers/gpu/drm/nouveau/include/nvkm/vgpu_mgr/vgpu_mgr.h
>> @@ -0,0 +1,17 @@
>> +/* SPDX-License-Identifier: MIT */
>
> Wait, what? Why? Ick. You all also forgot the copyright line :(
>

Will fix it accordingly. Back to the reason: I am trying to follow the
majority in nouveau, since this is a change to nouveau. What are your
guidelines about the files already in the code?

inno@inno-linux:~/vgpu-linux-rfc/drivers/gpu/drm/nouveau$ grep -A 3 -R ": MIT" *
dispnv04/disp.h:/* SPDX-License-Identifier: MIT */
dispnv04/disp.h-#ifndef __NV04_DISPLAY_H__
dispnv04/disp.h-#define __NV04_DISPLAY_H__
dispnv04/disp.h-#include <subdev/bios.h>
--
dispnv04/cursor.c:// SPDX-License-Identifier: MIT
dispnv04/cursor.c-#include <drm/drm_mode.h>
dispnv04/cursor.c-#include "nouveau_drv.h"
dispnv04/cursor.c-#include "nouveau_reg.h"
--
dispnv04/Kbuild:# SPDX-License-Identifier: MIT
dispnv04/Kbuild-nouveau-y += dispnv04/arb.o
dispnv04/Kbuild-nouveau-y += dispnv04/crtc.o
dispnv04/Kbuild-nouveau-y += dispnv04/cursor.o
--
dispnv50/crc.h:/* SPDX-License-Identifier: MIT */
dispnv50/crc.h-#ifndef __NV50_CRC_H__
dispnv50/crc.h-#define __NV50_CRC_H__
dispnv50/crc.h-
--
dispnv50/handles.h:/* SPDX-License-Identifier: MIT */
dispnv50/handles.h-#ifndef __NV50_KMS_HANDLES_H__
dispnv50/handles.h-#define __NV50_KMS_HANDLES_H__
dispnv50/handles.h-
--
dispnv50/crcc37d.h:/* SPDX-License-Identifier: MIT */
dispnv50/crcc37d.h-
dispnv50/crcc37d.h-#ifndef __CRCC37D_H__
dispnv50/crcc37d.h-#define __CRCC37D_H__
--
dispnv50/Kbuild:# SPDX-License-Identifier: MIT
dispnv50/Kbuild-nouveau-y += dispnv50/disp.o
dispnv50/Kbuild-nouveau-y += dispnv50/lut.o

>> --- /dev/null
>> +++ b/drivers/gpu/drm/nouveau/nvkm/vgpu_mgr/vgpu_mgr.c
>> @@ -0,0 +1,76 @@
>> +/* SPDX-License-Identifier: MIT */
>> +#include <core/device.h>
>> +#include <core/pci.h>
>> +#include <vgpu_mgr/vgpu_mgr.h>
>> +
>> +static bool support_vgpu_mgr = false;
>
> A global variable for the whole system? Are you sure that will work
> well over time? Why isn't this a per-device thing?
>
>> +module_param_named(support_vgpu_mgr, support_vgpu_mgr, bool, 0400);
>
> This is not the 1990's, please never add new module parameters, use
> per-device variables. And no documentation? That's not ok either even
> if you did want to have this.
>

Thanks for the comments. I am mostly collecting people's opinions on the
means of enabling/disabling vGPU; a kernel parameter is just one of the
options. If it is chosen, a global kernel parameter is not expected to
remain in the non-RFC patch.

>> +static inline struct pci_dev *nvkm_to_pdev(struct nvkm_device *device)
>> +{
>> +        struct nvkm_device_pci *pci = container_of(device, typeof(*pci),
>> +                                                   device);
>> +
>> +        return pci->pdev;
>> +}
>> +
>> +/**
>> + * nvkm_vgpu_mgr_is_supported - check if a platform supports vGPU
>> + * @device: the nvkm_device pointer
>> + *
>> + * Returns: true on supported platform which is newer than ADA Lovelace
>> + * with SRIOV support.
>> + */
>> +bool nvkm_vgpu_mgr_is_supported(struct nvkm_device *device)
>> +{
>> +        struct pci_dev *pdev = nvkm_to_pdev(device);
>> +
>> +        if (!support_vgpu_mgr)
>> +                return false;
>> +
>> +        return device->card_type == AD100 && pci_sriov_get_totalvfs(pdev);
>
> checkpatch please.
>

I did run it before sending, but it doesn't complain about this line. My
command line:

$ scripts/checkpatch.pl [this patch]

> And "AD100" is an odd #define, as you know.

I agree, and people commented on it in the internal review. But it comes
from the nouveau driver and is already used in many other places there.
What would be your guidelines in this situation?

>
>> +}
>> +
>> +/**
>> + * nvkm_vgpu_mgr_is_enabled - check if vGPU support is enabled on a PF
>> + * @device: the nvkm_device pointer
>> + *
>> + * Returns: true if vGPU enabled.
>> + */
>> +bool nvkm_vgpu_mgr_is_enabled(struct nvkm_device *device)
>> +{
>> +        return device->vgpu_mgr.enabled;
>
> What happens if this changes right after you look at it?
>

Nice catch. Will fix it.

>
>> +}
>> +
>> +/**
>> + * nvkm_vgpu_mgr_init - Initialize the vGPU manager support
>> + * @device: the nvkm_device pointer
>> + *
>> + * Returns: 0 on success, -ENODEV on platforms that are not supported.
>> + */
>> +int nvkm_vgpu_mgr_init(struct nvkm_device *device)
>> +{
>> +        struct nvkm_vgpu_mgr *vgpu_mgr = &device->vgpu_mgr;
>> +
>> +        if (!nvkm_vgpu_mgr_is_supported(device))
>> +                return -ENODEV;
>> +
>> +        vgpu_mgr->nvkm_dev = device;
>> +        vgpu_mgr->enabled = true;
>> +
>> +        pci_info(nvkm_to_pdev(device),
>> +                 "NVIDIA vGPU manager support is enabled.\n");
>
> When drivers work properly, they are quiet.
>

I totally understand the rule that a driver should be quiet. But this is
not the same as "driver is loaded"; this is feature reporting, like many
other messages. My concern is that, since nouveau is a kernel driver,
when a user hits a kernel panic and offers a dmesg to analyze, it would
be at least nice to know whether the vGPU feature is turned on or not.
Sysfs is doable, but dmesg helps in different scenarios.

> Why can't you see this all in the sysfs tree instead to know if support
> is there or not? You all are properly tying in your "sub driver" logic
> to the driver model, right? (hint, I don't think so as it looks like
> that isn't happening, but I could be missing it...)
>
> thanks,
>
> greg k-h
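As a closing note on the "what happens if this changes right after you
look at it" exchange: if the flag is only ever set once during
initialization, one minimal fix is to publish it with release/acquire
ordering, so a reader either sees the fully initialized vgpu_mgr or sees
it as disabled. A sketch under that assumption (field names follow the
posted patch; the barrier pairing is not from the series, and the same
includes as the posted vgpu_mgr.c are assumed):

/*
 * Sketch only: write-once publication of vgpu_mgr->enabled. The
 * smp_store_release()/smp_load_acquire() pair guarantees a reader that
 * observes enabled == true also observes all initialization done
 * before the store, and prevents torn or reordered accesses.
 */
int nvkm_vgpu_mgr_init(struct nvkm_device *device)
{
        struct nvkm_vgpu_mgr *vgpu_mgr = &device->vgpu_mgr;

        if (!nvkm_vgpu_mgr_is_supported(device))
                return -ENODEV;

        vgpu_mgr->nvkm_dev = device;
        /* Publish only after all setup above is visible to other CPUs. */
        smp_store_release(&vgpu_mgr->enabled, true);
        return 0;
}

bool nvkm_vgpu_mgr_is_enabled(struct nvkm_device *device)
{
        /* Paired with smp_store_release() in nvkm_vgpu_mgr_init(). */
        return smp_load_acquire(&device->vgpu_mgr.enabled);
}

If the flag can also be cleared at runtime (module teardown, VF disable),
ordering alone is not sufficient: callers would need to hold a lock or a
reference across the whole use, not just across the check.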