Alexandre Courbot
2016-Nov-21 08:28 UTC
[Nouveau] [PATCH v4 0/33] Secure Boot refactoring / signed PMU firmware support for GM20B
This revision includes initial signed PMU firmware support for GM20B (Tegra
X1). This PMU code will also be used as a basis for dGPU signed PMU firmware
support. With the PMU code in place, the refactoring of secure boot should
also make more sense.

ACR (secure boot) support is now separated by the driver version it
originates from. This separation allows running any version of the ACR on
any chip, although in practice only one version should ever be released for
any given chip. Since the ACR only changes slightly from one version to the
next, supporting a new version only requires programming its differences
against the previous one.

The same applies to the PMU firmware, although with a different versioning
scheme: the firmware version number is encoded in a field of the descriptor
file. This version is arbitrary, but can be matched to a given set of
message formats. Again, much code can be reused between versions.

The PMU code for GM20B is available from branch 'gm20b' of
https://github.com/Gnurou/linux-firmware, which will be pushed upstream
unless issues are raised about the PMU file naming scheme.

Note that the code in this series will still work if the PMU firmware is not
present, so compatibility with non-updated user space is maintained.

Changes since v1:
- Use NVIDIA driver versions to differentiate the ACR structures instead of
  arbitrary numbers
- Add abstractions to firmware loading functions
- Optimized set of abstractions
- Removed some more code

Changes since v2:
- Fix naming of new structures/functions

Changes since v3:
- Add PMU support code for GM20B

Alexandre Courbot (31):
  core: constify nv*_printk macros
  core: add falcon library
  secboot: use falcon library's IMEM/DMEM loading functions
  secboot: rename init() hook to oneinit()
  secboot: remove fixup_hs_desc hook
  secboot: add low-secure firmware hooks
  secboot: generate HS BL descriptor in hook
  secboot: reorganize into more files
  secboot: add LS flags to LS func structure
  secboot: split reset function
  secboot: disable falcon interrupts before running
  secboot: remove unneeded ls_ucode_img member
  secboot: remove ls_ucode_mgr
  secboot: abstract LS firmware loading functions
  secboot: safer zeroing of BL descriptors
  secboot: add missing fields to BL structure
  secboot: set default error value in error register
  secboot: fix WPR descriptor generation
  secboot: add lazy-bootstrap flag
  secboot: store falcon's DMEM size in secboot structure
  secboot: clear halt interrupt after ACR is run
  core: add falcon DMEM read function
  pmu: add nvkm_pmu_ctor function
  pmu: make sure the reset hook exists before running it
  secboot: add LS firmware post-run hooks
  secboot: support for loading LS PMU firmware
  secboot: base support for PMU falcon
  secboot: write PMU firmware version into register
  secboot: enable PMU in r352 ACR
  secboot: support optional falcons
  gm20b: enable PMU

Deepak Goyal (2):
  pmu: support for GM20X
  pmu: support for GM20B signed firmware

 drm/nouveau/include/nvkm/core/client.h         |    4 +-
 drm/nouveau/include/nvkm/core/device.h         |    2 +-
 drm/nouveau/include/nvkm/core/falcon.h         |   51 +-
 drm/nouveau/include/nvkm/core/subdev.h         |    2 +-
 drm/nouveau/include/nvkm/subdev/pmu.h          |   12 +-
 drm/nouveau/include/nvkm/subdev/secboot.h      |   30 +-
 drm/nouveau/nvkm/core/Kbuild                   |    1 +-
 drm/nouveau/nvkm/core/falcon.c                 |   72 +-
 drm/nouveau/nvkm/engine/device/base.c          |    1 +-
 drm/nouveau/nvkm/engine/gr/gf100.c             |   16 +-
 drm/nouveau/nvkm/engine/gr/gm200.c             |    6 +-
 drm/nouveau/nvkm/subdev/pmu/Kbuild             |    3 +-
 drm/nouveau/nvkm/subdev/pmu/base.c             |   68 +-
 drm/nouveau/nvkm/subdev/pmu/gm200.c            |  713 +++++++++++-
 drm/nouveau/nvkm/subdev/pmu/gm200.h            |  104 +-
 drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c      |  255 +++-
 drm/nouveau/nvkm/subdev/pmu/nv_pmu.h           |   53 +-
 drm/nouveau/nvkm/subdev/pmu/priv.h             |   22 +-
 drm/nouveau/nvkm/subdev/secboot/Kbuild         |    5 +-
 drm/nouveau/nvkm/subdev/secboot/acr.c          |   54 +-
 drm/nouveau/nvkm/subdev/secboot/acr.h          |   73 +-
 drm/nouveau/nvkm/subdev/secboot/acr_r352.c     | 1111 ++++++++++++++-
 drm/nouveau/nvkm/subdev/secboot/acr_r352.h     |  252 +++-
 drm/nouveau/nvkm/subdev/secboot/acr_r361.c     |  135 ++-
 drm/nouveau/nvkm/subdev/secboot/base.c         |  151 +-
 drm/nouveau/nvkm/subdev/secboot/gm200.c        | 1337 +-----------------
 drm/nouveau/nvkm/subdev/secboot/gm200.h        |   43 +-
 drm/nouveau/nvkm/subdev/secboot/gm20b.c        |  128 +--
 drm/nouveau/nvkm/subdev/secboot/ls_ucode.h     |  153 ++-
 drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c  |  158 ++-
 drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c |   89 +-
 drm/nouveau/nvkm/subdev/secboot/priv.h         |  199 +---
 32 files changed, 3636 insertions(+), 1667 deletions(-)
 create mode 100644 drm/nouveau/include/nvkm/core/falcon.h
 create mode 100644 drm/nouveau/nvkm/core/falcon.c
 create mode 100644 drm/nouveau/nvkm/subdev/pmu/gm200.c
 create mode 100644 drm/nouveau/nvkm/subdev/pmu/gm200.h
 create mode 100644 drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c
 create mode 100644 drm/nouveau/nvkm/subdev/pmu/nv_pmu.h
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr.c
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr.h
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r352.c
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r352.h
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r361.c
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/gm200.h
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode.h
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c
 create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c

-- 
git-series 0.8.10
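To give an idea of what the per-version split looks like, a newer ACR version
can mostly be expressed as a delta against the previous one, overriding only
the hooks whose format changed. The sketch below is purely illustrative --
the structure and function names are hypothetical, not the exact interface
introduced by this series:

/* Illustrative only: an ACR version described as a delta against r352.
 * All "example_" names are hypothetical. */
struct example_acr_func {
	void (*generate_hs_bl_desc)(void *desc, u64 offset);
	void (*generate_ls_bl_desc)(const struct ls_ucode_img *img,
				    u64 wpr_addr, void *desc);
	u32 ls_bl_desc_size;
};

static const struct example_acr_func
example_acr_r361_func = {
	/* r361 changes the bootloader descriptor format... */
	.generate_hs_bl_desc = example_r361_generate_hs_bl_desc,
	.generate_ls_bl_desc = example_r361_generate_ls_bl_desc,
	.ls_bl_desc_size = sizeof(struct example_r361_bl_desc),
	/* ...everything else is reused from the r352 implementation. */
};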
Alexandre Courbot
2016-Nov-21 08:28 UTC
[Nouveau] [PATCH v4 1/33] core: constify nv*_printk macros
Constify the local variables declared in these macros so we can pass const pointers to them. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/core/client.h | 4 ++-- drm/nouveau/include/nvkm/core/device.h | 2 +- drm/nouveau/include/nvkm/core/subdev.h | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drm/nouveau/include/nvkm/core/client.h b/drm/nouveau/include/nvkm/core/client.h index eaf5905a87a3..99083349c3d4 100644 --- a/drm/nouveau/include/nvkm/core/client.h +++ b/drm/nouveau/include/nvkm/core/client.h @@ -37,8 +37,8 @@ int nvkm_client_notify_put(struct nvkm_client *, int index); /* logging for client-facing objects */ #define nvif_printk(o,l,p,f,a...) do { \ - struct nvkm_object *_object = (o); \ - struct nvkm_client *_client = _object->client; \ + const struct nvkm_object *_object = (o); \ + const struct nvkm_client *_client = _object->client; \ if (_client->debug >= NV_DBG_##l) \ printk(KERN_##p "nouveau: %s:%08x:%08x: "f, _client->name, \ _object->handle, _object->oclass, ##a); \ diff --git a/drm/nouveau/include/nvkm/core/device.h b/drm/nouveau/include/nvkm/core/device.h index 6bc712f32c8b..d426b86e2712 100644 --- a/drm/nouveau/include/nvkm/core/device.h +++ b/drm/nouveau/include/nvkm/core/device.h @@ -262,7 +262,7 @@ extern const struct nvkm_sclass nvkm_udevice_sclass; /* device logging */ #define nvdev_printk_(d,l,p,f,a...) do { \ - struct nvkm_device *_device = (d); \ + const struct nvkm_device *_device = (d); \ if (_device->debug >= (l)) \ dev_##p(_device->dev, f, ##a); \ } while(0) diff --git a/drm/nouveau/include/nvkm/core/subdev.h b/drm/nouveau/include/nvkm/core/subdev.h index 57adefa8b08e..ca9ed3d68f44 100644 --- a/drm/nouveau/include/nvkm/core/subdev.h +++ b/drm/nouveau/include/nvkm/core/subdev.h @@ -32,7 +32,7 @@ void nvkm_subdev_intr(struct nvkm_subdev *); /* subdev logging */ #define nvkm_printk_(s,l,p,f,a...) do { \ - struct nvkm_subdev *_subdev = (s); \ + const struct nvkm_subdev *_subdev = (s); \ if (_subdev->debug >= (l)) { \ dev_##p(_subdev->device->dev, "%s: "f, \ nvkm_subdev_name[_subdev->index], ##a); \ -- git-series 0.8.10
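The practical benefit is that code holding only a const pointer can now use
the logging macros built on top of these. A minimal, hypothetical example of
the kind of caller this enables (not part of the patch):

/* Hypothetical caller, for illustration only: a read-only helper that
 * can now log through nvkm_error() despite taking a const pointer. */
static int
example_check(const struct nvkm_subdev *subdev, u32 value)
{
	if (value != 0x1) {
		nvkm_error(subdev, "unexpected value %08x\n", value);
		return -EINVAL;
	}
	return 0;
}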
Alexandre Courbot
2016-Nov-21 08:28 UTC
[Nouveau] [PATCH v4 2/33] core: add falcon library
Some falcon functionality, like loading code/data into IMEM/DMEM, is re-implemented in various parts of the driver. Create a small falcon library that will contain most common operations in order to avoid duplicate code. For now this library contains various defines that are used in secure boot code, plus IMEM and DMEM loading functions. In addition to the library itself, this patch updates users of the previously secure-boot only definitions to use the new global ones. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/core/falcon.h | 50 +++++++++++++++++++- drm/nouveau/include/nvkm/subdev/secboot.h | 16 +----- drm/nouveau/nvkm/core/Kbuild | 1 +- drm/nouveau/nvkm/core/falcon.c | 62 ++++++++++++++++++++++++- drm/nouveau/nvkm/engine/gr/gf100.c | 16 +++--- drm/nouveau/nvkm/engine/gr/gm200.c | 6 +-- drm/nouveau/nvkm/subdev/secboot/base.c | 23 ++------- drm/nouveau/nvkm/subdev/secboot/gm200.c | 48 +++++++------------ drm/nouveau/nvkm/subdev/secboot/gm20b.c | 4 +- drm/nouveau/nvkm/subdev/secboot/priv.h | 8 +-- 10 files changed, 159 insertions(+), 75 deletions(-) create mode 100644 drm/nouveau/include/nvkm/core/falcon.h create mode 100644 drm/nouveau/nvkm/core/falcon.c diff --git a/drm/nouveau/include/nvkm/core/falcon.h b/drm/nouveau/include/nvkm/core/falcon.h new file mode 100644 index 000000000000..530119847163 --- /dev/null +++ b/drm/nouveau/include/nvkm/core/falcon.h @@ -0,0 +1,50 @@ +/* + * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */ + +#ifndef __NVKM_FALCON_H__ +#define __NVKM_FALCON_H__ + +#include <core/device.h> + +enum nvkm_falconidx { + NVKM_FALCON_PMU = 0, + NVKM_FALCON_RESERVED = 1, + NVKM_FALCON_FECS = 2, + NVKM_FALCON_GPCCS = 3, + NVKM_FALCON_END = 4, + NVKM_FALCON_INVALID = 0xffffffff, +}; + +enum nvkm_falcon_dmaidx { + FALCON_DMAIDX_UCODE = 0, + FALCON_DMAIDX_VIRT = 1, + FALCON_DMAIDX_PHYS_VID = 2, + FALCON_DMAIDX_PHYS_SYS_COH = 3, + FALCON_DMAIDX_PHYS_SYS_NCOH = 4, +}; + +extern const char *nvkm_falcon_name[]; + +void nvkm_falcon_load_imem(struct nvkm_device *, u32, void *, u32, u32, u32); +void nvkm_falcon_load_dmem(struct nvkm_device *, u32, void *, u32, u32); + +#endif diff --git a/drm/nouveau/include/nvkm/subdev/secboot.h b/drm/nouveau/include/nvkm/subdev/secboot.h index b04c38c07761..ffc2204d2a50 100644 --- a/drm/nouveau/include/nvkm/subdev/secboot.h +++ b/drm/nouveau/include/nvkm/subdev/secboot.h @@ -24,15 +24,7 @@ #define __NVKM_SECURE_BOOT_H__ #include <core/subdev.h> - -enum nvkm_secboot_falcon { - NVKM_SECBOOT_FALCON_PMU = 0, - NVKM_SECBOOT_FALCON_RESERVED = 1, - NVKM_SECBOOT_FALCON_FECS = 2, - NVKM_SECBOOT_FALCON_GPCCS = 3, - NVKM_SECBOOT_FALCON_END = 4, - NVKM_SECBOOT_FALCON_INVALID = 0xffffffff, -}; +#include <core/falcon.h> /** * @base: base IO address of the falcon performing secure boot @@ -48,9 +40,9 @@ struct nvkm_secboot { }; #define nvkm_secboot(p) container_of((p), struct nvkm_secboot, subdev) -bool nvkm_secboot_is_managed(struct nvkm_secboot *, enum nvkm_secboot_falcon); -int nvkm_secboot_reset(struct nvkm_secboot *, u32 falcon); -int nvkm_secboot_start(struct nvkm_secboot *, u32 falcon); +bool nvkm_secboot_is_managed(struct nvkm_secboot *, enum nvkm_falconidx); +int nvkm_secboot_reset(struct nvkm_secboot *, enum nvkm_falconidx); +int nvkm_secboot_start(struct nvkm_secboot *, enum nvkm_falconidx); int gm200_secboot_new(struct nvkm_device *, int, struct nvkm_secboot **); int gm20b_secboot_new(struct nvkm_device *, int, struct nvkm_secboot **); diff --git a/drm/nouveau/nvkm/core/Kbuild b/drm/nouveau/nvkm/core/Kbuild index 86a31a8e1e51..4196e4620c3b 100644 --- a/drm/nouveau/nvkm/core/Kbuild +++ b/drm/nouveau/nvkm/core/Kbuild @@ -13,3 +13,4 @@ nvkm-y += nvkm/core/oproxy.o nvkm-y += nvkm/core/option.o nvkm-y += nvkm/core/ramht.o nvkm-y += nvkm/core/subdev.o +nvkm-y += nvkm/core/falcon.o diff --git a/drm/nouveau/nvkm/core/falcon.c b/drm/nouveau/nvkm/core/falcon.c new file mode 100644 index 000000000000..806de4088a29 --- /dev/null +++ b/drm/nouveau/nvkm/core/falcon.c @@ -0,0 +1,62 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ + +#include <core/falcon.h> + +const char * +nvkm_falcon_name[] = { + [NVKM_FALCON_PMU] = "PMU", + [NVKM_FALCON_RESERVED] = "<reserved>", + [NVKM_FALCON_FECS] = "FECS", + [NVKM_FALCON_GPCCS] = "GPCCS", + [NVKM_FALCON_END] = "<invalid>", +}; + +void +nvkm_falcon_load_imem(struct nvkm_device *device, u32 base, void *data, + u32 start, u32 size, u32 tag) +{ + int i; + + nvkm_wr32(device, base + 0x180, start | (0x1 << 24)); + for (i = 0; i < size / 4; i++) { + /* write new tag every 256B */ + if ((i & 0x3f) == 0) { + nvkm_wr32(device, base + 0x188, tag & 0xffff); + tag++; + } + nvkm_wr32(device, base + 0x184, ((u32 *)data)[i]); + } + nvkm_wr32(device, base + 0x188, 0); +} + +void +nvkm_falcon_load_dmem(struct nvkm_device *device, u32 base, void *data, + u32 start, u32 size) +{ + int i; + + nvkm_wr32(device, base + 0x1c0, start | (0x1 << 24)); + for (i = 0; i < size / 4; i++) + nvkm_wr32(device, base + 0x1c4, ((u32 *)data)[i]); +} + diff --git a/drm/nouveau/nvkm/engine/gr/gf100.c b/drm/nouveau/nvkm/engine/gr/gf100.c index 60a1b5c8214b..b8095ca89352 100644 --- a/drm/nouveau/nvkm/engine/gr/gf100.c +++ b/drm/nouveau/nvkm/engine/gr/gf100.c @@ -1464,16 +1464,16 @@ gf100_gr_init_ctxctl(struct gf100_gr *gr) nvkm_mc_unk260(device, 0); /* securely-managed falcons must be reset using secure boot */ - if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_FECS)) - ret = nvkm_secboot_reset(sb, NVKM_SECBOOT_FALCON_FECS); + if (nvkm_secboot_is_managed(sb, NVKM_FALCON_FECS)) + ret = nvkm_secboot_reset(sb, NVKM_FALCON_FECS); else gf100_gr_init_fw(gr, 0x409000, &gr->fuc409c, &gr->fuc409d); if (ret) return ret; - if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_GPCCS)) - ret = nvkm_secboot_reset(sb, NVKM_SECBOOT_FALCON_GPCCS); + if (nvkm_secboot_is_managed(sb, NVKM_FALCON_GPCCS)) + ret = nvkm_secboot_reset(sb, NVKM_FALCON_GPCCS); else gf100_gr_init_fw(gr, 0x41a000, &gr->fuc41ac, &gr->fuc41ad); @@ -1487,12 +1487,12 @@ gf100_gr_init_ctxctl(struct gf100_gr *gr) nvkm_wr32(device, 0x41a10c, 0x00000000); nvkm_wr32(device, 0x40910c, 0x00000000); - if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_GPCCS)) - nvkm_secboot_start(sb, NVKM_SECBOOT_FALCON_GPCCS); + if (nvkm_secboot_is_managed(sb, NVKM_FALCON_GPCCS)) + nvkm_secboot_start(sb, NVKM_FALCON_GPCCS); else nvkm_wr32(device, 0x41a100, 0x00000002); - if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_FECS)) - nvkm_secboot_start(sb, NVKM_SECBOOT_FALCON_FECS); + if (nvkm_secboot_is_managed(sb, NVKM_FALCON_FECS)) + nvkm_secboot_start(sb, NVKM_FALCON_FECS); else nvkm_wr32(device, 0x409100, 0x00000002); if (nvkm_msec(device, 2000, diff --git a/drm/nouveau/nvkm/engine/gr/gm200.c b/drm/nouveau/nvkm/engine/gr/gm200.c index 6435f1257572..7f4de8e3c643 100644 --- a/drm/nouveau/nvkm/engine/gr/gm200.c +++ b/drm/nouveau/nvkm/engine/gr/gm200.c @@ -184,14 +184,12 @@ gm200_gr_new_(const struct gf100_gr_func *func, struct nvkm_device *device, return ret; /* Load firmwares for non-secure falcons */ - if (!nvkm_secboot_is_managed(device->secboot, - NVKM_SECBOOT_FALCON_FECS)) { + if (!nvkm_secboot_is_managed(device->secboot, NVKM_FALCON_FECS)) { if ((ret = gf100_gr_ctor_fw(gr, "gr/fecs_inst", &gr->fuc409c)) || (ret = gf100_gr_ctor_fw(gr, "gr/fecs_data", &gr->fuc409d))) return ret; } - if 
(!nvkm_secboot_is_managed(device->secboot, - NVKM_SECBOOT_FALCON_GPCCS)) { + if (!nvkm_secboot_is_managed(device->secboot, NVKM_FALCON_GPCCS)) { if ((ret = gf100_gr_ctor_fw(gr, "gr/gpccs_inst", &gr->fuc41ac)) || (ret = gf100_gr_ctor_fw(gr, "gr/gpccs_data", &gr->fuc41ad))) return ret; diff --git a/drm/nouveau/nvkm/subdev/secboot/base.c b/drm/nouveau/nvkm/subdev/secboot/base.c index 314be2192b7d..6b3346ff0253 100644 --- a/drm/nouveau/nvkm/subdev/secboot/base.c +++ b/drm/nouveau/nvkm/subdev/secboot/base.c @@ -21,18 +21,10 @@ */ #include "priv.h" +#include <core/falcon.h> #include <subdev/mc.h> #include <subdev/timer.h> -static const char * -managed_falcons_names[] = { - [NVKM_SECBOOT_FALCON_PMU] = "PMU", - [NVKM_SECBOOT_FALCON_RESERVED] = "<reserved>", - [NVKM_SECBOOT_FALCON_FECS] = "FECS", - [NVKM_SECBOOT_FALCON_GPCCS] = "GPCCS", - [NVKM_SECBOOT_FALCON_END] = "<invalid>", -}; - /* * Helper falcon functions */ @@ -155,7 +147,6 @@ nvkm_secboot_falcon_run(struct nvkm_secboot *sb) return 0; } - /** * nvkm_secboot_reset() - reset specified falcon */ @@ -190,8 +181,7 @@ nvkm_secboot_start(struct nvkm_secboot *sb, u32 falcon) * nvkm_secboot_is_managed() - check whether a given falcon is securely-managed */ bool -nvkm_secboot_is_managed(struct nvkm_secboot *secboot, - enum nvkm_secboot_falcon fid) +nvkm_secboot_is_managed(struct nvkm_secboot *secboot, enum nvkm_falconidx fid) { if (!secboot) return false; @@ -253,14 +243,14 @@ nvkm_secboot_ctor(const struct nvkm_secboot_func *func, struct nvkm_device *device, int index, struct nvkm_secboot *sb) { - unsigned long fid; + unsigned long id; nvkm_subdev_ctor(&nvkm_secboot, device, index, &sb->subdev); sb->func = func; /* setup the performing falcon's base address and masks */ switch (func->boot_falcon) { - case NVKM_SECBOOT_FALCON_PMU: + case NVKM_FALCON_PMU: sb->devidx = NVKM_SUBDEV_PMU; sb->base = 0x10a000; break; @@ -270,9 +260,8 @@ nvkm_secboot_ctor(const struct nvkm_secboot_func *func, }; nvkm_debug(&sb->subdev, "securely managed falcons:\n"); - for_each_set_bit(fid, &sb->func->managed_falcons, - NVKM_SECBOOT_FALCON_END) - nvkm_debug(&sb->subdev, "- %s\n", managed_falcons_names[fid]); + for_each_set_bit(id, &sb->func->managed_falcons, NVKM_FALCON_END) + nvkm_debug(&sb->subdev, "- %s\n", nvkm_falcon_name[id]); return 0; } diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index ec48e4ace37a..a525d09afa37 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -87,14 +87,6 @@ #include <core/firmware.h> #include <subdev/fb.h> -enum { - FALCON_DMAIDX_UCODE = 0, - FALCON_DMAIDX_VIRT = 1, - FALCON_DMAIDX_PHYS_VID = 2, - FALCON_DMAIDX_PHYS_SYS_COH = 3, - FALCON_DMAIDX_PHYS_SYS_NCOH = 4, -}; - /** * struct fw_bin_header - header of firmware files * @bin_magic: always 0x3b1d14f0 @@ -296,7 +288,7 @@ struct ls_ucode_img_desc { */ struct ls_ucode_img { struct list_head node; - enum nvkm_secboot_falcon falcon_id; + enum nvkm_falconidx falcon_id; struct ls_ucode_img_desc ucode_desc; u32 *ucode_header; @@ -531,14 +523,14 @@ static int ls_ucode_img_load_fecs(struct nvkm_subdev *subdev, struct ls_ucode_img *img) { return ls_ucode_img_load_generic(subdev, img, "fecs", - NVKM_SECBOOT_FALCON_FECS); + NVKM_FALCON_FECS); } static int ls_ucode_img_load_gpccs(struct nvkm_subdev *subdev, struct ls_ucode_img *img) { return ls_ucode_img_load_generic(subdev, img, "gpccs", - NVKM_SECBOOT_FALCON_GPCCS); + NVKM_FALCON_GPCCS); } /** @@ -564,9 +556,9 @@ ls_ucode_img_load(struct nvkm_subdev 
*subdev, lsf_load_func load_func) } static const lsf_load_func lsf_load_funcs[] = { - [NVKM_SECBOOT_FALCON_END] = NULL, /* reserve enough space */ - [NVKM_SECBOOT_FALCON_FECS] = ls_ucode_img_load_fecs, - [NVKM_SECBOOT_FALCON_GPCCS] = ls_ucode_img_load_gpccs, + [NVKM_FALCON_END] = NULL, /* reserve enough space */ + [NVKM_FALCON_FECS] = ls_ucode_img_load_fecs, + [NVKM_FALCON_GPCCS] = ls_ucode_img_load_gpccs, }; /** @@ -685,7 +677,7 @@ ls_ucode_img_fill_headers(struct gm200_secboot *gsb, struct ls_ucode_img *img, lhdr->flags = LSF_FLAG_DMACTL_REQ_CTX; /* GPCCS will be loaded using PRI */ - if (img->falcon_id == NVKM_SECBOOT_FALCON_GPCCS) + if (img->falcon_id == NVKM_FALCON_GPCCS) lhdr->flags |= LSF_FLAG_FORCE_PRIV_LOAD; /* Align (size bloat) and save off BL descriptor size */ @@ -794,7 +786,7 @@ ls_ucode_mgr_write_wpr(struct gm200_secboot *gsb, struct ls_ucode_mgr *mgr, pos += sizeof(img->wpr_header); } - nvkm_wo32(wpr_blob, pos, NVKM_SECBOOT_FALCON_INVALID); + nvkm_wo32(wpr_blob, pos, NVKM_FALCON_INVALID); nvkm_done(wpr_blob); @@ -824,7 +816,7 @@ gm200_secboot_prepare_ls_blob(struct gm200_secboot *gsb) /* Load all LS blobs */ for_each_set_bit(falcon_id, &gsb->base.func->managed_falcons, - NVKM_SECBOOT_FALCON_END) { + NVKM_FALCON_END) { struct ls_ucode_img *img; img = ls_ucode_img_load(&sb->subdev, lsf_load_funcs[falcon_id]); @@ -1258,7 +1250,7 @@ done: * falcons should have their LS firmware loaded and be ready to run. */ int -gm200_secboot_reset(struct nvkm_secboot *sb, enum nvkm_secboot_falcon falcon) +gm200_secboot_reset(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) { struct gm200_secboot *gsb = gm200_secboot(sb); int ret; @@ -1276,12 +1268,12 @@ gm200_secboot_reset(struct nvkm_secboot *sb, enum nvkm_secboot_falcon falcon) * Once we have proper PMU firmware and support, this will be changed * to a proper call to the PMU method. 
*/ - if (falcon != NVKM_SECBOOT_FALCON_FECS) + if (falcon != NVKM_FALCON_FECS) goto end; /* If WPR is set and we have an unload blob, run it to unlock WPR */ if (gsb->acr_unload_blob && - gsb->falcon_state[NVKM_SECBOOT_FALCON_FECS] != NON_SECURE) { + gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) { ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob, &gsb->acr_unload_bl_desc); if (ret) @@ -1300,16 +1292,16 @@ end: } int -gm200_secboot_start(struct nvkm_secboot *sb, enum nvkm_secboot_falcon falcon) +gm200_secboot_start(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) { struct gm200_secboot *gsb = gm200_secboot(sb); int base; switch (falcon) { - case NVKM_SECBOOT_FALCON_FECS: + case NVKM_FALCON_FECS: base = 0x409000; break; - case NVKM_SECBOOT_FALCON_GPCCS: + case NVKM_FALCON_GPCCS: base = 0x41a000; break; default: @@ -1373,11 +1365,11 @@ gm200_secboot_fini(struct nvkm_secboot *sb, bool suspend) /* Run the unload blob to unprotect the WPR region */ if (gsb->acr_unload_blob && - gsb->falcon_state[NVKM_SECBOOT_FALCON_FECS] != NON_SECURE) + gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob, &gsb->acr_unload_bl_desc); - for (i = 0; i < NVKM_SECBOOT_FALCON_END; i++) + for (i = 0; i < NVKM_FALCON_END; i++) gsb->falcon_state[i] = NON_SECURE; return ret; @@ -1409,9 +1401,9 @@ gm200_secboot = { .fini = gm200_secboot_fini, .reset = gm200_secboot_reset, .start = gm200_secboot_start, - .managed_falcons = BIT(NVKM_SECBOOT_FALCON_FECS) | - BIT(NVKM_SECBOOT_FALCON_GPCCS), - .boot_falcon = NVKM_SECBOOT_FALCON_PMU, + .managed_falcons = BIT(NVKM_FALCON_FECS) | + BIT(NVKM_FALCON_GPCCS), + .boot_falcon = NVKM_FALCON_PMU, }; /** diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index d5395ebfe8d3..66a1d01f45ce 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -192,8 +192,8 @@ gm20b_secboot = { .init = gm20b_secboot_init, .reset = gm200_secboot_reset, .start = gm200_secboot_start, - .managed_falcons = BIT(NVKM_SECBOOT_FALCON_FECS), - .boot_falcon = NVKM_SECBOOT_FALCON_PMU, + .managed_falcons = BIT(NVKM_FALCON_FECS), + .boot_falcon = NVKM_FALCON_PMU, }; int diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index a9a8a0e1017e..b1ef3d1b4c9d 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -30,11 +30,11 @@ struct nvkm_secboot_func { int (*init)(struct nvkm_secboot *); int (*fini)(struct nvkm_secboot *, bool suspend); void *(*dtor)(struct nvkm_secboot *); - int (*reset)(struct nvkm_secboot *, enum nvkm_secboot_falcon); - int (*start)(struct nvkm_secboot *, enum nvkm_secboot_falcon); + int (*reset)(struct nvkm_secboot *, enum nvkm_falconidx); + int (*start)(struct nvkm_secboot *, enum nvkm_falconidx); /* ID of the falcon that will perform secure boot */ - enum nvkm_secboot_falcon boot_falcon; + enum nvkm_falconidx boot_falcon; /* Bit-mask of IDs of managed falcons */ unsigned long managed_falcons; }; @@ -191,7 +191,7 @@ struct gm200_secboot { RESET, /* In low-secure mode and running */ RUNNING, - } falcon_state[NVKM_SECBOOT_FALCON_END]; + } falcon_state[NVKM_FALCON_END]; bool firmware_ok; }; -- git-series 0.8.10
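For reference, this is roughly how the two new helpers are meant to be used;
the base address and offsets below are made up for illustration (the secure
boot code converted in the next patch is the real user):

/* Illustrative sketch: upload a (code, data) pair to a falcon.
 * 0x10a000 is used as the falcon IO base purely as an example. */
static void
example_upload(struct nvkm_device *device, void *code, u32 code_size,
	       void *data, u32 data_size)
{
	const u32 base = 0x10a000;

	/* Code is written to IMEM starting at offset 0; the helper writes
	 * a new 256-byte tag every 64 words, starting from tag 0 here. */
	nvkm_falcon_load_imem(device, base, code, 0x0, code_size, 0);

	/* Data is written to DMEM starting at offset 0. */
	nvkm_falcon_load_dmem(device, base, data, 0x0, data_size);
}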
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 3/33] secboot: use falcon library's IMEM/DMEM loading functions
Replace the falcon loading functions with calls to the equivalent functions of the falcon library. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 31 +++++++------------------- 1 file changed, 9 insertions(+), 22 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index a525d09afa37..dcd759930c63 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -1103,39 +1103,26 @@ gm200_secboot_load_hs_bl(struct gm200_secboot *gsb, void *data, u32 data_size) void *hsbl_data = blob_data + hsbl_desc->data_off; u32 code_size = ALIGN(hsbl_desc->code_size, 256); const u32 base = gsb->base.base; - u32 blk; - u32 tag; - int i; + u32 code_start; /* * Copy HS bootloader data */ - nvkm_wr32(device, base + 0x1c0, (0x00000000 | (0x1 << 24))); - for (i = 0; i < hsbl_desc->data_size / 4; i++) - nvkm_wr32(device, base + 0x1c4, ((u32 *)hsbl_data)[i]); + nvkm_falcon_load_dmem(device, gsb->base.base, hsbl_data, 0x00000, + hsbl_desc->data_size); /* * Copy HS bootloader interface structure where the HS descriptor * expects it to be */ - nvkm_wr32(device, base + 0x1c0, - (hsbl_desc->dmem_load_off | (0x1 << 24))); - for (i = 0; i < data_size / 4; i++) - nvkm_wr32(device, base + 0x1c4, ((u32 *)data)[i]); + nvkm_falcon_load_dmem(device, gsb->base.base, data, + hsbl_desc->dmem_load_off, data_size); /* Copy HS bootloader code to end of IMEM */ - blk = (nvkm_rd32(device, base + 0x108) & 0x1ff) - (code_size >> 8); - tag = hsbl_desc->start_tag; - nvkm_wr32(device, base + 0x180, ((blk & 0xff) << 8) | (0x1 << 24)); - for (i = 0; i < code_size / 4; i++) { - /* write new tag every 256B */ - if ((i & 0x3f) == 0) { - nvkm_wr32(device, base + 0x188, tag & 0xffff); - tag++; - } - nvkm_wr32(device, base + 0x184, ((u32 *)hsbl_code)[i]); - } - nvkm_wr32(device, base + 0x188, 0); + code_start = (nvkm_rd32(device, base + 0x108) & 0x1ff) << 8; + code_start -= code_size; + nvkm_falcon_load_imem(device, gsb->base.base, hsbl_code, code_start, + code_size, hsbl_desc->start_tag); } /** -- git-series 0.8.10
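The only subtle part of the conversion is the IMEM placement: the register at
base + 0x108 reports the IMEM size in 256-byte blocks, and the HS bootloader
must end exactly at the top of IMEM. A worked example with made-up numbers:

/* Example values only: a falcon reporting 0x40 IMEM blocks and a
 * bootloader with 0x542 bytes of code. */
u32 imem_size  = 0x40 << 8;             /* 0x4000 bytes of IMEM */
u32 code_size  = ALIGN(0x542, 256);     /* 0x600 after alignment */
u32 code_start = imem_size - code_size; /* 0x3a00: IMEM load address */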
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 4/33] secboot: rename init() hook to oneinit()
The init() hook is called by the subdev's oneinit(). Rename it accordingly to avoid confusion about the lifetime of objects allocated in it. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/base.c | 4 ++-- drm/nouveau/nvkm/subdev/secboot/gm200.c | 4 ++-- drm/nouveau/nvkm/subdev/secboot/gm20b.c | 6 +++--- drm/nouveau/nvkm/subdev/secboot/priv.h | 4 ++-- 4 files changed, 9 insertions(+), 9 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/base.c b/drm/nouveau/nvkm/subdev/secboot/base.c index 6b3346ff0253..ea36851358ea 100644 --- a/drm/nouveau/nvkm/subdev/secboot/base.c +++ b/drm/nouveau/nvkm/subdev/secboot/base.c @@ -196,8 +196,8 @@ nvkm_secboot_oneinit(struct nvkm_subdev *subdev) int ret = 0; /* Call chip-specific init function */ - if (sb->func->init) - ret = sb->func->init(sb); + if (sb->func->oneinit) + ret = sb->func->oneinit(sb); if (ret) { nvkm_error(subdev, "Secure Boot initialization failed: %d\n", ret); diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index dcd759930c63..945afaf457d2 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -1305,7 +1305,7 @@ gm200_secboot_start(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) int -gm200_secboot_init(struct nvkm_secboot *sb) +gm200_secboot_oneinit(struct nvkm_secboot *sb) { struct gm200_secboot *gsb = gm200_secboot(sb); struct nvkm_device *device = sb->subdev.device; @@ -1384,7 +1384,7 @@ gm200_secboot_dtor(struct nvkm_secboot *sb) static const struct nvkm_secboot_func gm200_secboot = { .dtor = gm200_secboot_dtor, - .init = gm200_secboot_init, + .oneinit = gm200_secboot_oneinit, .fini = gm200_secboot_fini, .reset = gm200_secboot_reset, .start = gm200_secboot_start, diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index 66a1d01f45ce..1cb663c31e17 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -174,7 +174,7 @@ gm20b_tegra_read_wpr(struct gm200_secboot *gsb) #endif static int -gm20b_secboot_init(struct nvkm_secboot *sb) +gm20b_secboot_oneinit(struct nvkm_secboot *sb) { struct gm200_secboot *gsb = gm200_secboot(sb); int ret; @@ -183,13 +183,13 @@ gm20b_secboot_init(struct nvkm_secboot *sb) if (ret) return ret; - return gm200_secboot_init(sb); + return gm200_secboot_oneinit(sb); } static const struct nvkm_secboot_func gm20b_secboot = { .dtor = gm200_secboot_dtor, - .init = gm20b_secboot_init, + .oneinit = gm20b_secboot_oneinit, .reset = gm200_secboot_reset, .start = gm200_secboot_start, .managed_falcons = BIT(NVKM_FALCON_FECS), diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index b1ef3d1b4c9d..baa802b1e9d2 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -27,7 +27,7 @@ #include <subdev/mmu.h> struct nvkm_secboot_func { - int (*init)(struct nvkm_secboot *); + int (*oneinit)(struct nvkm_secboot *); int (*fini)(struct nvkm_secboot *, bool suspend); void *(*dtor)(struct nvkm_secboot *); int (*reset)(struct nvkm_secboot *, enum nvkm_falconidx); @@ -224,7 +224,7 @@ struct gm200_secboot_func { int (*prepare_blobs)(struct gm200_secboot *); }; -int gm200_secboot_init(struct nvkm_secboot *); +int gm200_secboot_oneinit(struct nvkm_secboot *); void *gm200_secboot_dtor(struct nvkm_secboot *); int gm200_secboot_reset(struct nvkm_secboot *, u32); int gm200_secboot_start(struct nvkm_secboot *, u32); -- git-series 
0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 5/33] secboot: remove fixup_hs_desc hook
This hook can be removed if the function writing the HS descriptor is aware of WPR settings. Let's do that as it allows us to make the ACR descriptor structure private and save some code. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 95 +++++++++++++++++++------- drm/nouveau/nvkm/subdev/secboot/gm20b.c | 11 +--- drm/nouveau/nvkm/subdev/secboot/priv.h | 60 ++-------------- 3 files changed, 79 insertions(+), 87 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index 945afaf457d2..3c22ed64c6af 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -771,7 +771,7 @@ ls_ucode_mgr_write_wpr(struct gm200_secboot *gsb, struct ls_ucode_mgr *mgr, u8 desc[gsb->func->bl_desc_size]; struct gm200_flcn_bl_desc gdesc; - ls_ucode_img_populate_bl_desc(img, gsb->wpr_addr, + ls_ucode_img_populate_bl_desc(img, gsb->acr_wpr_addr, &gdesc); gsb->func->fixup_bl_desc(&gdesc, &desc); nvkm_gpuobj_memcpy_to(wpr_blob, @@ -846,8 +846,11 @@ gm200_secboot_prepare_ls_blob(struct gm200_secboot *gsb) /* If WPR address and size are not fixed, set them to fit the LS blob */ if (!gsb->wpr_size) { - gsb->wpr_addr = gsb->ls_blob->addr; - gsb->wpr_size = gsb->ls_blob->size; + gsb->acr_wpr_addr = gsb->ls_blob->addr; + gsb->acr_wpr_size = gsb->ls_blob->size; + } else { + gsb->acr_wpr_addr = gsb->wpr_addr; + gsb->acr_wpr_size = gsb->wpr_size; } /* Write LS blob */ @@ -925,6 +928,69 @@ gm200_secboot_populate_hsf_bl_desc(void *acr_image, } /** + * struct hsflcn_acr_desc - data section of the HS firmware + * + * This header is to be copied at the beginning of DMEM by the HS bootloader. + * + * @signature: signature of ACR ucode + * @wpr_region_id: region ID holding the WPR header and its details + * @wpr_offset: offset from the WPR region holding the wpr header + * @regions: region descriptors + * @nonwpr_ucode_blob_size: size of LS blob + * @nonwpr_ucode_blob_start: FB location of LS blob is + */ +struct hsflcn_acr_desc { + union { + u8 reserved_dmem[0x200]; + u32 signatures[4]; + } ucode_reserved_space; + u32 wpr_region_id; + u32 wpr_offset; + u32 mmu_mem_range; +#define FLCN_ACR_MAX_REGIONS 2 + struct { + u32 no_regions; + struct { + u32 start_addr; + u32 end_addr; + u32 region_id; + u32 read_mask; + u32 write_mask; + u32 client_mask; + } region_props[FLCN_ACR_MAX_REGIONS]; + } regions; + u32 ucode_blob_size; + u64 ucode_blob_base __aligned(8); + struct { + u32 vpr_enabled; + u32 vpr_start; + u32 vpr_end; + u32 hdcp_policies; + } vpr_desc; +}; + +static void +gm200_secboot_fixup_hs_desc(struct gm200_secboot *gsb, + struct hsflcn_acr_desc *desc) +{ + desc->ucode_blob_base = gsb->ls_blob->addr; + desc->ucode_blob_size = gsb->ls_blob->size; + + desc->wpr_offset = 0; + + /* WPR region information if WPR is not fixed */ + if (gsb->wpr_size == 0) { + desc->wpr_region_id = 1; + desc->regions.no_regions = 1; + desc->regions.region_props[0].region_id = 1; + desc->regions.region_props[0].start_addr + gsb->acr_wpr_addr >> 8; + desc->regions.region_props[0].end_addr + (gsb->acr_wpr_addr + gsb->acr_wpr_size) >> 8; + } +} + +/** * gm200_secboot_prepare_hs_blob - load and prepare a HS blob and BL descriptor * * @gsb secure boot instance to prepare for @@ -957,12 +1023,12 @@ gm200_secboot_prepare_hs_blob(struct gm200_secboot *gsb, const char *fw, acr_data = acr_image + hsbin_hdr->data_offset; - /* Patch descriptor? */ + /* Patch descriptor with WPR information? 
*/ if (patch) { fw_hdr = acr_image + hsbin_hdr->header_offset; load_hdr = acr_image + fw_hdr->hdr_offset; desc = acr_data + load_hdr->data_dma_base; - gsb->func->fixup_hs_desc(gsb, desc); + gm200_secboot_fixup_hs_desc(gsb, desc); } /* Generate HS BL descriptor */ @@ -1404,29 +1470,10 @@ gm200_secboot_fixup_bl_desc(const struct gm200_flcn_bl_desc *desc, void *ret) memcpy(ret, desc, sizeof(*desc)); } -static void -gm200_secboot_fixup_hs_desc(struct gm200_secboot *gsb, - struct hsflcn_acr_desc *desc) -{ - desc->ucode_blob_base = gsb->ls_blob->addr; - desc->ucode_blob_size = gsb->ls_blob->size; - - desc->wpr_offset = 0; - - /* WPR region information for the HS binary to set up */ - desc->wpr_region_id = 1; - desc->regions.no_regions = 1; - desc->regions.region_props[0].region_id = 1; - desc->regions.region_props[0].start_addr = gsb->wpr_addr >> 8; - desc->regions.region_props[0].end_addr - (gsb->wpr_addr + gsb->wpr_size) >> 8; -} - static const struct gm200_secboot_func gm200_secboot_func = { .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), .fixup_bl_desc = gm200_secboot_fixup_bl_desc, - .fixup_hs_desc = gm200_secboot_fixup_hs_desc, .prepare_blobs = gm200_secboot_prepare_blobs, }; diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index 1cb663c31e17..3d9f3748864f 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -99,21 +99,10 @@ gm20b_secboot_fixup_bl_desc(const struct gm200_flcn_bl_desc *desc, void *ret) gdesc->data_size = desc->data_size; } -static void -gm20b_secboot_fixup_hs_desc(struct gm200_secboot *gsb, - struct hsflcn_acr_desc *desc) -{ - desc->ucode_blob_base = gsb->ls_blob->addr; - desc->ucode_blob_size = gsb->ls_blob->size; - - desc->wpr_offset = 0; -} - static const struct gm200_secboot_func gm20b_secboot_func = { .bl_desc_size = sizeof(struct gm20b_flcn_bl_desc), .fixup_bl_desc = gm20b_secboot_fixup_bl_desc, - .fixup_hs_desc = gm20b_secboot_fixup_hs_desc, .prepare_blobs = gm20b_secboot_prepare_blobs, }; diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index baa802b1e9d2..ce0f3c87212b 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -91,48 +91,6 @@ struct gm200_flcn_bl_desc { }; /** - * struct hsflcn_acr_desc - data section of the HS firmware - * - * This header is to be copied at the beginning of DMEM by the HS bootloader. 
- * - * @signature: signature of ACR ucode - * @wpr_region_id: region ID holding the WPR header and its details - * @wpr_offset: offset from the WPR region holding the wpr header - * @regions: region descriptors - * @nonwpr_ucode_blob_size: size of LS blob - * @nonwpr_ucode_blob_start: FB location of LS blob is - */ -struct hsflcn_acr_desc { - union { - u8 reserved_dmem[0x200]; - u32 signatures[4]; - } ucode_reserved_space; - u32 wpr_region_id; - u32 wpr_offset; - u32 mmu_mem_range; -#define FLCN_ACR_MAX_REGIONS 2 - struct { - u32 no_regions; - struct { - u32 start_addr; - u32 end_addr; - u32 region_id; - u32 read_mask; - u32 write_mask; - u32 client_mask; - } region_props[FLCN_ACR_MAX_REGIONS]; - } regions; - u32 ucode_blob_size; - u64 ucode_blob_base __aligned(8); - struct { - u32 vpr_enabled; - u32 vpr_start; - u32 vpr_end; - u32 hdcp_policies; - } vpr_desc; -}; - -/** * Contains the whole secure boot state, allowing it to be performed as needed * @wpr_addr: physical address of the WPR region * @wpr_size: size in bytes of the WPR region @@ -154,14 +112,19 @@ struct gm200_secboot { const struct gm200_secboot_func *func; /* - * Address and size of the WPR region. On dGPU this will be the - * address of the LS blob. On Tegra this is a fixed region set by the - * bootloader + * Address and size of the fixed WPR region, if any. On Tegra this + * region is set by the bootloader */ u64 wpr_addr; u32 wpr_size; /* + * Address and size of the actual WPR region. + */ + u64 acr_wpr_addr; + u32 acr_wpr_size; + + /* * HS FW - lock WPR region (dGPU only) and load LS FWs * on Tegra the HS FW copies the LS blob into the fixed WPR instead */ @@ -203,7 +166,6 @@ struct gm200_secboot { * @fixup_bl_desc: hook that generates the proper BL descriptor format from * the generic GM200 format into a data array of size * bl_desc_size - * @fixup_hs_desc: hook that twiddles the HS descriptor before it is used * @prepare_blobs: prepares the various blobs needed for secure booting */ struct gm200_secboot_func { @@ -215,12 +177,6 @@ struct gm200_secboot_func { u32 bl_desc_size; void (*fixup_bl_desc)(const struct gm200_flcn_bl_desc *, void *); - /* - * Chip-specific modifications of the HS descriptor can be done here. - * On dGPU this is used to fill the information about the WPR region - * we want the HS FW to set up. - */ - void (*fixup_hs_desc)(struct gm200_secboot *, struct hsflcn_acr_desc *); int (*prepare_blobs)(struct gm200_secboot *); }; -- git-series 0.8.10
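Note that the addresses written into the region properties are expressed in
256-byte units, hence the right shift by 8. With hypothetical values, the
non-fixed-WPR case ends up filling the descriptor like this:

/* Hypothetical values, for illustration only. */
u64 acr_wpr_addr = 0x2000000;   /* address of the LS blob */
u32 acr_wpr_size = 0x40000;     /* size of the LS blob */

desc->wpr_region_id = 1;
desc->regions.no_regions = 1;
desc->regions.region_props[0].region_id  = 1;
desc->regions.region_props[0].start_addr = acr_wpr_addr >> 8;   /* 0x20000 */
desc->regions.region_props[0].end_addr   =
	(acr_wpr_addr + acr_wpr_size) >> 8;                      /* 0x20400 */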
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 6/33] secboot: add low-secure firmware hooks
Secure firmwares provided by NVIDIA will follow the same overall principle, but may slightly differ in format, or not use the same bootloader descriptor even on the same chip. In order to handle this as gracefully as possible, turn the LS firmware functions into hooks that can be overloaded as needed. The current hooks cover the external firmware loading as well as the bootloader descriptor generation. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 239 ++++--------------------- drm/nouveau/nvkm/subdev/secboot/gm20b.c | 29 +++- drm/nouveau/nvkm/subdev/secboot/priv.h | 193 ++++++++++++++++++++- 3 files changed, 264 insertions(+), 197 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index 3c22ed64c6af..e82aee5c5ae7 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -130,175 +130,6 @@ struct fw_bl_desc { }; -/* - * - * LS blob structures - * - */ - -/** - * struct lsf_ucode_desc - LS falcon signatures - * @prd_keys: signature to use when the GPU is in production mode - * @dgb_keys: signature to use when the GPU is in debug mode - * @b_prd_present: whether the production key is present - * @b_dgb_present: whether the debug key is present - * @falcon_id: ID of the falcon the ucode applies to - * - * Directly loaded from a signature file. - */ -struct lsf_ucode_desc { - u8 prd_keys[2][16]; - u8 dbg_keys[2][16]; - u32 b_prd_present; - u32 b_dbg_present; - u32 falcon_id; -}; - -/** - * struct lsf_lsb_header - LS firmware header - * @signature: signature to verify the firmware against - * @ucode_off: offset of the ucode blob in the WPR region. The ucode - * blob contains the bootloader, code and data of the - * LS falcon - * @ucode_size: size of the ucode blob, including bootloader - * @data_size: size of the ucode blob data - * @bl_code_size: size of the bootloader code - * @bl_imem_off: offset in imem of the bootloader - * @bl_data_off: offset of the bootloader data in WPR region - * @bl_data_size: size of the bootloader data - * @app_code_off: offset of the app code relative to ucode_off - * @app_code_size: size of the app code - * @app_data_off: offset of the app data relative to ucode_off - * @app_data_size: size of the app data - * @flags: flags for the secure bootloader - * - * This structure is written into the WPR region for each managed falcon. Each - * instance is referenced by the lsb_offset member of the corresponding - * lsf_wpr_header. - */ -struct lsf_lsb_header { - struct lsf_ucode_desc signature; - u32 ucode_off; - u32 ucode_size; - u32 data_size; - u32 bl_code_size; - u32 bl_imem_off; - u32 bl_data_off; - u32 bl_data_size; - u32 app_code_off; - u32 app_code_size; - u32 app_data_off; - u32 app_data_size; - u32 flags; -#define LSF_FLAG_LOAD_CODE_AT_0 1 -#define LSF_FLAG_DMACTL_REQ_CTX 4 -#define LSF_FLAG_FORCE_PRIV_LOAD 8 -}; - -/** - * struct lsf_wpr_header - LS blob WPR Header - * @falcon_id: LS falcon ID - * @lsb_offset: offset of the lsb_lsf_header in the WPR region - * @bootstrap_owner: secure falcon reponsible for bootstrapping the LS falcon - * @lazy_bootstrap: skip bootstrapping by ACR - * @status: bootstrapping status - * - * An array of these is written at the beginning of the WPR region, one for - * each managed falcon. The array is terminated by an instance which falcon_id - * is LSF_FALCON_ID_INVALID. 
- */ -struct lsf_wpr_header { - u32 falcon_id; - u32 lsb_offset; - u32 bootstrap_owner; - u32 lazy_bootstrap; - u32 status; -#define LSF_IMAGE_STATUS_NONE 0 -#define LSF_IMAGE_STATUS_COPY 1 -#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2 -#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3 -#define LSF_IMAGE_STATUS_VALIDATION_DONE 4 -#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5 -#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6 -}; - - -/** - * struct ls_ucode_img_desc - descriptor of firmware image - * @descriptor_size: size of this descriptor - * @image_size: size of the whole image - * @bootloader_start_offset: start offset of the bootloader in ucode image - * @bootloader_size: size of the bootloader - * @bootloader_imem_offset: start off set of the bootloader in IMEM - * @bootloader_entry_point: entry point of the bootloader in IMEM - * @app_start_offset: start offset of the LS firmware - * @app_size: size of the LS firmware's code and data - * @app_imem_offset: offset of the app in IMEM - * @app_imem_entry: entry point of the app in IMEM - * @app_dmem_offset: offset of the data in DMEM - * @app_resident_code_offset: offset of app code from app_start_offset - * @app_resident_code_size: size of the code - * @app_resident_data_offset: offset of data from app_start_offset - * @app_resident_data_size: size of data - * - * A firmware image contains the code, data, and bootloader of a given LS - * falcon in a single blob. This structure describes where everything is. - * - * This can be generated from a (bootloader, code, data) set if they have - * been loaded separately, or come directly from a file. - */ -struct ls_ucode_img_desc { - u32 descriptor_size; - u32 image_size; - u32 tools_version; - u32 app_version; - char date[64]; - u32 bootloader_start_offset; - u32 bootloader_size; - u32 bootloader_imem_offset; - u32 bootloader_entry_point; - u32 app_start_offset; - u32 app_size; - u32 app_imem_offset; - u32 app_imem_entry; - u32 app_dmem_offset; - u32 app_resident_code_offset; - u32 app_resident_code_size; - u32 app_resident_data_offset; - u32 app_resident_data_size; - u32 nb_overlays; - struct {u32 start; u32 size; } load_ovl[64]; - u32 compressed; -}; - -/** - * struct ls_ucode_img - temporary storage for loaded LS firmwares - * @node: to link within lsf_ucode_mgr - * @falcon_id: ID of the falcon this LS firmware is for - * @ucode_desc: loaded or generated map of ucode_data - * @ucode_header: header of the firmware - * @ucode_data: firmware payload (code and data) - * @ucode_size: size in bytes of data in ucode_data - * @wpr_header: WPR header to be written to the LS blob - * @lsb_header: LSB header to be written to the LS blob - * - * Preparing the WPR LS blob requires information about all the LS firmwares - * (size, etc) to be known. This structure contains all the data of one LS - * firmware. - */ -struct ls_ucode_img { - struct list_head node; - enum nvkm_falconidx falcon_id; - - struct ls_ucode_img_desc ucode_desc; - u32 *ucode_header; - u8 *ucode_data; - u32 ucode_size; - - struct lsf_wpr_header wpr_header; - struct lsf_lsb_header lsb_header; -}; - /** * struct ls_ucode_mgr - manager for all LS falcon firmwares * @count: number of managed LS falcons @@ -363,7 +194,7 @@ struct hsf_load_header { * it has the required minimum size. 
*/ static void * -gm200_secboot_load_firmware(struct nvkm_subdev *subdev, const char *name, +gm200_secboot_load_firmware(const struct nvkm_subdev *subdev, const char *name, size_t min_size) { const struct firmware *fw; @@ -456,7 +287,7 @@ ls_ucode_img_build(const struct firmware *bl, const struct firmware *code, * blob. Also generate the corresponding ucode descriptor. */ static int -ls_ucode_img_load_generic(struct nvkm_subdev *subdev, +ls_ucode_img_load_generic(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, const char *falcon_name, const u32 falcon_id) { @@ -517,17 +348,17 @@ error: return ret; } -typedef int (*lsf_load_func)(struct nvkm_subdev *, struct ls_ucode_img *); +typedef int (*lsf_load_func)(const struct nvkm_subdev *, struct ls_ucode_img *); -static int -ls_ucode_img_load_fecs(struct nvkm_subdev *subdev, struct ls_ucode_img *img) +int +gm200_ls_load_fecs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) { return ls_ucode_img_load_generic(subdev, img, "fecs", NVKM_FALCON_FECS); } -static int -ls_ucode_img_load_gpccs(struct nvkm_subdev *subdev, struct ls_ucode_img *img) +int +gm200_ls_load_gpccs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) { return ls_ucode_img_load_generic(subdev, img, "gpccs", NVKM_FALCON_GPCCS); @@ -555,14 +386,8 @@ ls_ucode_img_load(struct nvkm_subdev *subdev, lsf_load_func load_func) return img; } -static const lsf_load_func lsf_load_funcs[] = { - [NVKM_FALCON_END] = NULL, /* reserve enough space */ - [NVKM_FALCON_FECS] = ls_ucode_img_load_fecs, - [NVKM_FALCON_GPCCS] = ls_ucode_img_load_gpccs, -}; - /** - * ls_ucode_img_populate_bl_desc() - populate a DMEM BL descriptor for LS image + * gm200_secboot_ls_bl_desc() - populate a DMEM BL descriptor for LS image * @img: ucode image to generate against * @desc: descriptor to populate * @sb: secure boot state to use for base addresses @@ -572,10 +397,11 @@ static const lsf_load_func lsf_load_funcs[] = { * */ static void -ls_ucode_img_populate_bl_desc(struct ls_ucode_img *img, u64 wpr_addr, - struct gm200_flcn_bl_desc *desc) +gm200_secboot_ls_bl_desc(const struct ls_ucode_img *img, u64 wpr_addr, + void *_desc) { - struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + struct gm200_flcn_bl_desc *desc = _desc; + const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; u64 addr_base; addr_base = wpr_addr + img->lsb_header.ucode_off + @@ -620,6 +446,8 @@ ls_ucode_img_fill_headers(struct gm200_secboot *gsb, struct ls_ucode_img *img, struct lsf_wpr_header *whdr = &img->wpr_header; struct lsf_lsb_header *lhdr = &img->lsb_header; struct ls_ucode_img_desc *desc = &img->ucode_desc; + const struct secboot_ls_single_func *func + (*gsb->ls_func)[img->falcon_id]; if (img->ucode_header) { nvkm_fatal(&gsb->base.subdev, @@ -680,9 +508,9 @@ ls_ucode_img_fill_headers(struct gm200_secboot *gsb, struct ls_ucode_img *img, if (img->falcon_id == NVKM_FALCON_GPCCS) lhdr->flags |= LSF_FLAG_FORCE_PRIV_LOAD; - /* Align (size bloat) and save off BL descriptor size */ - lhdr->bl_data_size = ALIGN(sizeof(struct gm200_flcn_bl_desc), - LSF_BL_DATA_SIZE_ALIGN); + /* Align and save off BL descriptor size */ + lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN); + /* * Align, save off, and include the additional BL data */ @@ -768,15 +596,16 @@ ls_ucode_mgr_write_wpr(struct gm200_secboot *gsb, struct ls_ucode_mgr *mgr, /* Generate and write BL descriptor */ if (!img->ucode_header) { - u8 desc[gsb->func->bl_desc_size]; - struct gm200_flcn_bl_desc gdesc; + const struct secboot_ls_single_func 
*ls_func + (*gsb->ls_func)[img->falcon_id]; + u8 gdesc[ls_func->bl_desc_size]; + + ls_func->generate_bl_desc(img, gsb->acr_wpr_addr, + &gdesc); - ls_ucode_img_populate_bl_desc(img, gsb->acr_wpr_addr, - &gdesc); - gsb->func->fixup_bl_desc(&gdesc, &desc); nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off, - &desc, gsb->func->bl_desc_size); + &gdesc, ls_func->bl_desc_size); } /* Copy ucode */ @@ -815,11 +644,12 @@ gm200_secboot_prepare_ls_blob(struct gm200_secboot *gsb) ls_ucode_mgr_init(&mgr); /* Load all LS blobs */ - for_each_set_bit(falcon_id, &gsb->base.func->managed_falcons, + for_each_set_bit(falcon_id, &sb->func->managed_falcons, NVKM_FALCON_END) { struct ls_ucode_img *img; - img = ls_ucode_img_load(&sb->subdev, lsf_load_funcs[falcon_id]); + img = ls_ucode_img_load(&sb->subdev, + (*gsb->ls_func)[falcon_id]->load); if (IS_ERR(img)) { ret = PTR_ERR(img); @@ -864,6 +694,20 @@ cleanup: return ret; } +static const secboot_ls_func +gm200_ls_func = { + [NVKM_FALCON_FECS] = &(struct secboot_ls_single_func) { + .load = gm200_ls_load_fecs, + .generate_bl_desc = gm200_secboot_ls_bl_desc, + .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), + }, + [NVKM_FALCON_GPCCS] = &(struct secboot_ls_single_func) { + .load = gm200_ls_load_gpccs, + .generate_bl_desc = gm200_secboot_ls_bl_desc, + .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), + }, +}; + /* * High-secure blob creation */ @@ -1496,6 +1340,7 @@ gm200_secboot_new(struct nvkm_device *device, int index, return ret; gsb->func = &gm200_secboot_func; + gsb->ls_func = &gm200_ls_func; return 0; } diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index 3d9f3748864f..d062ccc0166f 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -42,6 +42,25 @@ struct gm20b_flcn_bl_desc { u32 data_size; }; +static void +gm20b_secboot_ls_bl_desc(const struct ls_ucode_img *img, u64 wpr_addr, + void *_desc) +{ + struct gm20b_flcn_bl_desc *desc = _desc; + const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + u64 base; + + base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; + + memset(desc, 0, sizeof(*desc)); + desc->ctx_dma = FALCON_DMAIDX_UCODE; + desc->code_dma_base = (base + pdesc->app_resident_code_offset) >> 8; + desc->non_sec_code_size = pdesc->app_resident_code_size; + desc->data_dma_base = (base + pdesc->app_resident_data_offset) >> 8; + desc->data_size = pdesc->app_resident_data_size; + desc->code_entry_point = pdesc->app_imem_entry; +} + static int gm20b_secboot_prepare_blobs(struct gm200_secboot *gsb) { @@ -185,6 +204,15 @@ gm20b_secboot = { .boot_falcon = NVKM_FALCON_PMU, }; +static const secboot_ls_func +gm20b_ls_func = { + [NVKM_FALCON_FECS] = &(struct secboot_ls_single_func) { + .load = gm200_ls_load_fecs, + .generate_bl_desc = gm20b_secboot_ls_bl_desc, + .bl_desc_size = sizeof(struct gm20b_flcn_bl_desc), + }, +}; + int gm20b_secboot_new(struct nvkm_device *device, int index, struct nvkm_secboot **psb) @@ -204,6 +232,7 @@ gm20b_secboot_new(struct nvkm_device *device, int index, return ret; gsb->func = &gm20b_secboot_func; + gsb->ls_func = &gm20b_ls_func; return 0; } diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index ce0f3c87212b..2a4c4d5a3c90 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -44,6 +44,175 @@ int nvkm_secboot_ctor(const struct nvkm_secboot_func *, struct nvkm_device *, int nvkm_secboot_falcon_reset(struct 
nvkm_secboot *); int nvkm_secboot_falcon_run(struct nvkm_secboot *); +/* + * + * LS blob structures + * + */ + +/** + * struct lsf_ucode_desc - LS falcon signatures + * @prd_keys: signature to use when the GPU is in production mode + * @dgb_keys: signature to use when the GPU is in debug mode + * @b_prd_present: whether the production key is present + * @b_dgb_present: whether the debug key is present + * @falcon_id: ID of the falcon the ucode applies to + * + * Directly loaded from a signature file. + */ +struct lsf_ucode_desc { + u8 prd_keys[2][16]; + u8 dbg_keys[2][16]; + u32 b_prd_present; + u32 b_dbg_present; + u32 falcon_id; +}; + +/** + * struct lsf_lsb_header - LS firmware header + * @signature: signature to verify the firmware against + * @ucode_off: offset of the ucode blob in the WPR region. The ucode + * blob contains the bootloader, code and data of the + * LS falcon + * @ucode_size: size of the ucode blob, including bootloader + * @data_size: size of the ucode blob data + * @bl_code_size: size of the bootloader code + * @bl_imem_off: offset in imem of the bootloader + * @bl_data_off: offset of the bootloader data in WPR region + * @bl_data_size: size of the bootloader data + * @app_code_off: offset of the app code relative to ucode_off + * @app_code_size: size of the app code + * @app_data_off: offset of the app data relative to ucode_off + * @app_data_size: size of the app data + * @flags: flags for the secure bootloader + * + * This structure is written into the WPR region for each managed falcon. Each + * instance is referenced by the lsb_offset member of the corresponding + * lsf_wpr_header. + */ +struct lsf_lsb_header { + struct lsf_ucode_desc signature; + u32 ucode_off; + u32 ucode_size; + u32 data_size; + u32 bl_code_size; + u32 bl_imem_off; + u32 bl_data_off; + u32 bl_data_size; + u32 app_code_off; + u32 app_code_size; + u32 app_data_off; + u32 app_data_size; + u32 flags; +#define LSF_FLAG_LOAD_CODE_AT_0 1 +#define LSF_FLAG_DMACTL_REQ_CTX 4 +#define LSF_FLAG_FORCE_PRIV_LOAD 8 +}; + +/** + * struct lsf_wpr_header - LS blob WPR Header + * @falcon_id: LS falcon ID + * @lsb_offset: offset of the lsb_lsf_header in the WPR region + * @bootstrap_owner: secure falcon reponsible for bootstrapping the LS falcon + * @lazy_bootstrap: skip bootstrapping by ACR + * @status: bootstrapping status + * + * An array of these is written at the beginning of the WPR region, one for + * each managed falcon. The array is terminated by an instance which falcon_id + * is LSF_FALCON_ID_INVALID. 
+ */ +struct lsf_wpr_header { + u32 falcon_id; + u32 lsb_offset; + u32 bootstrap_owner; + u32 lazy_bootstrap; + u32 status; +#define LSF_IMAGE_STATUS_NONE 0 +#define LSF_IMAGE_STATUS_COPY 1 +#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2 +#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3 +#define LSF_IMAGE_STATUS_VALIDATION_DONE 4 +#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5 +#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6 +}; + + +/** + * struct ls_ucode_img_desc - descriptor of firmware image + * @descriptor_size: size of this descriptor + * @image_size: size of the whole image + * @bootloader_start_offset: start offset of the bootloader in ucode image + * @bootloader_size: size of the bootloader + * @bootloader_imem_offset: start off set of the bootloader in IMEM + * @bootloader_entry_point: entry point of the bootloader in IMEM + * @app_start_offset: start offset of the LS firmware + * @app_size: size of the LS firmware's code and data + * @app_imem_offset: offset of the app in IMEM + * @app_imem_entry: entry point of the app in IMEM + * @app_dmem_offset: offset of the data in DMEM + * @app_resident_code_offset: offset of app code from app_start_offset + * @app_resident_code_size: size of the code + * @app_resident_data_offset: offset of data from app_start_offset + * @app_resident_data_size: size of data + * + * A firmware image contains the code, data, and bootloader of a given LS + * falcon in a single blob. This structure describes where everything is. + * + * This can be generated from a (bootloader, code, data) set if they have + * been loaded separately, or come directly from a file. + */ +struct ls_ucode_img_desc { + u32 descriptor_size; + u32 image_size; + u32 tools_version; + u32 app_version; + char date[64]; + u32 bootloader_start_offset; + u32 bootloader_size; + u32 bootloader_imem_offset; + u32 bootloader_entry_point; + u32 app_start_offset; + u32 app_size; + u32 app_imem_offset; + u32 app_imem_entry; + u32 app_dmem_offset; + u32 app_resident_code_offset; + u32 app_resident_code_size; + u32 app_resident_data_offset; + u32 app_resident_data_size; + u32 nb_overlays; + struct {u32 start; u32 size; } load_ovl[64]; + u32 compressed; +}; + +/** + * struct ls_ucode_img - temporary storage for loaded LS firmwares + * @node: to link within lsf_ucode_mgr + * @falcon_id: ID of the falcon this LS firmware is for + * @ucode_desc: loaded or generated map of ucode_data + * @ucode_header: header of the firmware + * @ucode_data: firmware payload (code and data) + * @ucode_size: size in bytes of data in ucode_data + * @wpr_header: WPR header to be written to the LS blob + * @lsb_header: LSB header to be written to the LS blob + * + * Preparing the WPR LS blob requires information about all the LS firmwares + * (size, etc) to be known. This structure contains all the data of one LS + * firmware. 
+ */ +struct ls_ucode_img { + struct list_head node; + enum nvkm_falconidx falcon_id; + + struct ls_ucode_img_desc ucode_desc; + u32 *ucode_header; + u8 *ucode_data; + u32 ucode_size; + + struct lsf_wpr_header wpr_header; + struct lsf_lsb_header lsb_header; +}; + struct flcn_u64 { u32 lo; u32 hi; @@ -91,6 +260,29 @@ struct gm200_flcn_bl_desc { }; /** + * struct secboot_ls_single_func - manages a single LS firmware + * + * @load: load the external firmware into a ls_ucode_img + * @generate_bl_desc: function called on a block of bl_desc_size to generate the + * proper bootloader descriptor for this LS firmware + * @bl_desc_size: size of the bootloader descriptor + */ +struct secboot_ls_single_func { + int (*load)(const struct nvkm_subdev *, struct ls_ucode_img *); + void (*generate_bl_desc)(const struct ls_ucode_img *, u64, void *); + u32 bl_desc_size; +}; + +/** + * typedef secboot_ls_func - manages all the LS firmwares for this ACR + */ +typedef const struct secboot_ls_single_func * +secboot_ls_func[NVKM_FALCON_END]; + +int gm200_ls_load_fecs(const struct nvkm_subdev *, struct ls_ucode_img *); +int gm200_ls_load_gpccs(const struct nvkm_subdev *, struct ls_ucode_img *); + +/** * Contains the whole secure boot state, allowing it to be performed as needed * @wpr_addr: physical address of the WPR region * @wpr_size: size in bytes of the WPR region @@ -110,6 +302,7 @@ struct gm200_flcn_bl_desc { struct gm200_secboot { struct nvkm_secboot base; const struct gm200_secboot_func *func; + const secboot_ls_func *ls_func; /* * Address and size of the fixed WPR region, if any. On Tegra this -- git-series 0.8.10
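The gsb->ls_func table introduced above is simply an array indexed by falcon ID: for every managed falcon, the generic code looks up its entry to load the firmware and to generate a bootloader descriptor into a scratch buffer of bl_desc_size bytes. Below is a stand-alone sketch of that dispatch pattern with simplified stand-in types; none of the nvkm_* helpers or real structures are used here.

#include <stdio.h>
#include <string.h>

enum falcon_id { FALCON_FECS, FALCON_GPCCS, FALCON_END };

struct ls_img {
	enum falcon_id falcon_id;
	unsigned long ucode_off;
};

struct ls_single_func {
	void (*generate_bl_desc)(const struct ls_img *, unsigned long, void *);
	unsigned int bl_desc_size;
};

/* one (optional) entry per falcon, mirroring secboot_ls_func above */
typedef const struct ls_single_func *ls_func_table[FALCON_END];

static void fecs_bl_desc(const struct ls_img *img, unsigned long wpr, void *_desc)
{
	unsigned long *desc = _desc;

	/* the real hook fills a chip-specific layout; here we only store the
	 * 256B-aligned base the falcon bootloader would DMA its code from */
	desc[0] = (wpr + img->ucode_off) >> 8;
}

static const ls_func_table demo_ls_func = {
	[FALCON_FECS] = &(const struct ls_single_func) {
		.generate_bl_desc = fecs_bl_desc,
		.bl_desc_size = 8 * sizeof(unsigned long),
	},
};

int main(void)
{
	struct ls_img img = { .falcon_id = FALCON_FECS, .ucode_off = 0x2000 };
	const struct ls_single_func *func = demo_ls_func[img.falcon_id];
	unsigned long gdesc[8];

	if (func) {
		memset(gdesc, 0, func->bl_desc_size);
		func->generate_bl_desc(&img, 0x100000, gdesc);
		printf("%u-byte BL descriptor, code base 0x%lx (256B units)\n",
		       func->bl_desc_size, gdesc[0]);
	}
	return 0;
}

Falcons without an entry are simply skipped, which is what makes the table approach replace the previous hard-coded per-chip loaders.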
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 7/33] secboot: generate HS BL descriptor in hook
Use the HS hook to completely generate the HS BL descriptor, similarly to what is done in the LS hook, instead of (arbitrarily) using the acr_v1 format as an intermediate. This allows us to make the bootloader descriptor structures private to each implementation, resulting in a cleaner an more consistent design. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 177 ++++++++++++------------- drm/nouveau/nvkm/subdev/secboot/gm20b.c | 43 ++---- drm/nouveau/nvkm/subdev/secboot/priv.h | 71 ++++------ 3 files changed, 137 insertions(+), 154 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index e82aee5c5ae7..3d4ae8324547 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -174,21 +174,45 @@ struct hsf_fw_header { u32 hdr_size; }; + /** - * struct hsf_load_header - HS firmware load header + * struct gm200_flcn_bl_desc - DMEM bootloader descriptor + * @signature: 16B signature for secure code. 0s if no secure code + * @ctx_dma: DMA context to be used by BL while loading code/data + * @code_dma_base: 256B-aligned Physical FB Address where code is located + * (falcon's $xcbase register) + * @non_sec_code_off: offset from code_dma_base where the non-secure code is + * located. The offset must be multiple of 256 to help perf + * @non_sec_code_size: the size of the nonSecure code part. + * @sec_code_off: offset from code_dma_base where the secure code is + * located. The offset must be multiple of 256 to help perf + * @sec_code_size: offset from code_dma_base where the secure code is + * located. The offset must be multiple of 256 to help perf + * @code_entry_point: code entry point which will be invoked by BL after + * code is loaded. + * @data_dma_base: 256B aligned Physical FB Address where data is located. + * (falcon's $xdbase register) + * @data_size: size of data block. Should be multiple of 256B + * + * Structure used by the bootloader to load the rest of the code. This has + * to be filled by host and copied into DMEM at offset provided in the + * hsflcn_bl_desc.bl_desc_dmem_load_off. */ -struct hsf_load_header { +struct gm200_flcn_bl_desc { + u32 reserved[4]; + u32 signature[4]; + u32 ctx_dma; + struct flcn_u64 code_dma_base; u32 non_sec_code_off; u32 non_sec_code_size; - u32 data_dma_base; + u32 sec_code_off; + u32 sec_code_size; + u32 code_entry_point; + struct flcn_u64 data_dma_base; u32 data_size; - u32 num_apps; - struct { - u32 sec_code_off; - u32 sec_code_size; - } app[0]; }; + /** * Convenience function to duplicate a firmware file in memory and check that * it has the required minimum size. @@ -739,39 +763,6 @@ gm200_secboot_hsf_patch_signature(struct gm200_secboot *gsb, void *acr_image) } /** - * gm200_secboot_populate_hsf_bl_desc() - populate BL descriptor for HS image - */ -static void -gm200_secboot_populate_hsf_bl_desc(void *acr_image, - struct gm200_flcn_bl_desc *bl_desc) -{ - struct fw_bin_header *hsbin_hdr = acr_image; - struct hsf_fw_header *fw_hdr = acr_image + hsbin_hdr->header_offset; - struct hsf_load_header *load_hdr = acr_image + fw_hdr->hdr_offset; - - /* - * Descriptor for the bootloader that will load the ACR image into - * IMEM/DMEM memory. 
- */ - fw_hdr = acr_image + hsbin_hdr->header_offset; - load_hdr = acr_image + fw_hdr->hdr_offset; - memset(bl_desc, 0, sizeof(*bl_desc)); - bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; - bl_desc->non_sec_code_off = load_hdr->non_sec_code_off; - bl_desc->non_sec_code_size = load_hdr->non_sec_code_size; - bl_desc->sec_code_off = load_hdr->app[0].sec_code_off; - bl_desc->sec_code_size = load_hdr->app[0].sec_code_size; - bl_desc->code_entry_point = 0; - /* - * We need to set code_dma_base to the virtual address of the acr_blob, - * and add this address to data_dma_base before writing it into DMEM - */ - bl_desc->code_dma_base.lo = 0; - bl_desc->data_dma_base.lo = load_hdr->data_dma_base; - bl_desc->data_size = load_hdr->data_size; -} - -/** * struct hsflcn_acr_desc - data section of the HS firmware * * This header is to be copied at the beginning of DMEM by the HS bootloader. @@ -846,37 +837,44 @@ gm200_secboot_fixup_hs_desc(struct gm200_secboot *gsb, static int gm200_secboot_prepare_hs_blob(struct gm200_secboot *gsb, const char *fw, struct nvkm_gpuobj **blob, - struct gm200_flcn_bl_desc *bl_desc, bool patch) + struct hsf_load_header *load_header, bool patch) { struct nvkm_subdev *subdev = &gsb->base.subdev; void *acr_image; struct fw_bin_header *hsbin_hdr; struct hsf_fw_header *fw_hdr; - void *acr_data; struct hsf_load_header *load_hdr; - struct hsflcn_acr_desc *desc; + void *acr_data; int ret; acr_image = gm200_secboot_load_firmware(subdev, fw, 0); if (IS_ERR(acr_image)) return PTR_ERR(acr_image); + hsbin_hdr = acr_image; + fw_hdr = acr_image + hsbin_hdr->header_offset; + load_hdr = acr_image + fw_hdr->hdr_offset; + acr_data = acr_image + hsbin_hdr->data_offset; /* Patch signature */ gm200_secboot_hsf_patch_signature(gsb, acr_image); - acr_data = acr_image + hsbin_hdr->data_offset; - /* Patch descriptor with WPR information? 
*/ if (patch) { - fw_hdr = acr_image + hsbin_hdr->header_offset; - load_hdr = acr_image + fw_hdr->hdr_offset; + struct hsflcn_acr_desc *desc; + desc = acr_data + load_hdr->data_dma_base; gm200_secboot_fixup_hs_desc(gsb, desc); } - /* Generate HS BL descriptor */ - gm200_secboot_populate_hsf_bl_desc(acr_image, bl_desc); + if (load_hdr->num_apps > GM200_ACR_MAX_APPS) { + nvkm_error(subdev, "more apps (%d) than supported (%d)!", + load_hdr->num_apps, GM200_ACR_MAX_APPS); + ret = -EINVAL; + goto cleanup; + } + memcpy(load_header, load_hdr, sizeof(*load_header) + + (sizeof(load_hdr->app[0]) * load_hdr->num_apps)); /* Create ACR blob and copy HS data to it */ ret = nvkm_gpuobj_new(subdev->device, ALIGN(hsbin_hdr->data_size, 256), @@ -937,7 +935,7 @@ gm20x_secboot_prepare_blobs(struct gm200_secboot *gsb) if (!gsb->acr_load_blob) { ret = gm200_secboot_prepare_hs_blob(gsb, "acr/ucode_load", &gsb->acr_load_blob, - &gsb->acr_load_bl_desc, true); + &gsb->load_bl_header, true); if (ret) return ret; } @@ -965,7 +963,7 @@ gm200_secboot_prepare_blobs(struct gm200_secboot *gsb) if (!gsb->acr_unload_blob) { ret = gm200_secboot_prepare_hs_blob(gsb, "acr/ucode_unload", &gsb->acr_unload_blob, - &gsb->acr_unload_bl_desc, false); + &gsb->unload_bl_header, false); if (ret) return ret; } @@ -1086,35 +1084,37 @@ gm200_secboot_setup_falcon(struct gm200_secboot *gsb) * gm200_secboot_run_hs_blob() - run the given high-secure blob */ static int -gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob, - struct gm200_flcn_bl_desc *desc) +gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob) { struct nvkm_vma vma; - u64 vma_addr; const u32 bl_desc_size = gsb->func->bl_desc_size; + const struct hsf_load_header *load_hdr; u8 bl_desc[bl_desc_size]; int ret; + /* Find the bootloader descriptor for our blob and copy it */ + if (blob == gsb->acr_load_blob) { + load_hdr = &gsb->load_bl_header; + + } else if (blob == gsb->acr_unload_blob) { + load_hdr = &gsb->unload_bl_header; + } else { + nvkm_error(&gsb->base.subdev, "invalid secure boot blob!\n"); + return -EINVAL; + } + /* Map the HS firmware so the HS bootloader can see it */ ret = nvkm_gpuobj_map(blob, gsb->vm, NV_MEM_ACCESS_RW, &vma); if (ret) return ret; - /* Add the mapping address to the DMA bases */ - vma_addr = flcn64_to_u64(desc->code_dma_base) + vma.offset; - desc->code_dma_base.lo = lower_32_bits(vma_addr); - desc->code_dma_base.hi = upper_32_bits(vma_addr); - vma_addr = flcn64_to_u64(desc->data_dma_base) + vma.offset; - desc->data_dma_base.lo = lower_32_bits(vma_addr); - desc->data_dma_base.hi = upper_32_bits(vma_addr); - - /* Fixup the BL header */ - gsb->func->fixup_bl_desc(desc, &bl_desc); + /* Generate the BL header */ + gsb->func->generate_bl_desc(load_hdr, bl_desc, vma.offset); /* Reset the falcon and make it ready to run the HS bootloader */ ret = gm200_secboot_setup_falcon(gsb); if (ret) - goto done; + goto end; /* Load the HS bootloader into the falcon's IMEM/DMEM */ gm200_secboot_load_hs_bl(gsb, &bl_desc, bl_desc_size); @@ -1122,17 +1122,9 @@ gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob, /* Start the HS bootloader */ ret = nvkm_secboot_falcon_run(&gsb->base); if (ret) - goto done; - -done: - /* Restore the original DMA addresses */ - vma_addr = flcn64_to_u64(desc->code_dma_base) - vma.offset; - desc->code_dma_base.lo = lower_32_bits(vma_addr); - desc->code_dma_base.hi = upper_32_bits(vma_addr); - vma_addr = flcn64_to_u64(desc->data_dma_base) - vma.offset; - 
desc->data_dma_base.lo = lower_32_bits(vma_addr); - desc->data_dma_base.hi = upper_32_bits(vma_addr); + goto end; +end: /* We don't need the ACR firmware anymore */ nvkm_gpuobj_unmap(&vma); @@ -1171,15 +1163,13 @@ gm200_secboot_reset(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) /* If WPR is set and we have an unload blob, run it to unlock WPR */ if (gsb->acr_unload_blob && gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) { - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob, - &gsb->acr_unload_bl_desc); + ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob); if (ret) return ret; } /* Reload all managed falcons */ - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_load_blob, - &gsb->acr_load_bl_desc); + ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_load_blob); if (ret) return ret; @@ -1263,8 +1253,7 @@ gm200_secboot_fini(struct nvkm_secboot *sb, bool suspend) /* Run the unload blob to unprotect the WPR region */ if (gsb->acr_unload_blob && gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob, - &gsb->acr_unload_bl_desc); + ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob); for (i = 0; i < NVKM_FALCON_END; i++) gsb->falcon_state[i] = NON_SECURE; @@ -1303,21 +1292,29 @@ gm200_secboot = { .boot_falcon = NVKM_FALCON_PMU, }; -/** - * gm200_fixup_bl_desc - just copy the BL descriptor - * - * Use the GM200 descriptor format by default. - */ static void -gm200_secboot_fixup_bl_desc(const struct gm200_flcn_bl_desc *desc, void *ret) +gm200_secboot_generate_bl_desc(const struct hsf_load_header *hdr, + void *_bl_desc, u64 offset) { - memcpy(ret, desc, sizeof(*desc)); + struct gm200_flcn_bl_desc *bl_desc = _bl_desc; + + memset(bl_desc, 0, sizeof(*bl_desc)); + bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; + bl_desc->non_sec_code_off = hdr->non_sec_code_off; + bl_desc->non_sec_code_size = hdr->non_sec_code_size; + bl_desc->sec_code_off = hdr->app[0].sec_code_off; + bl_desc->sec_code_size = hdr->app[0].sec_code_size; + bl_desc->code_entry_point = 0; + + bl_desc->code_dma_base = u64_to_flcn64(offset); + bl_desc->data_dma_base = u64_to_flcn64(offset + hdr->data_dma_base); + bl_desc->data_size = hdr->data_size; } static const struct gm200_secboot_func gm200_secboot_func = { .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), - .fixup_bl_desc = gm200_secboot_fixup_bl_desc, + .generate_bl_desc = gm200_secboot_generate_bl_desc, .prepare_blobs = gm200_secboot_prepare_blobs, }; diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index d062ccc0166f..403b4d690902 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -87,41 +87,28 @@ gm20b_secboot_prepare_blobs(struct gm200_secboot *gsb) return 0; } -/** - * gm20b_secboot_fixup_bl_desc - adapt BL descriptor to format used by GM20B FW - * - * There is only a slight format difference (DMA addresses being 32-bits and - * 256B-aligned) to address. 
- */ static void -gm20b_secboot_fixup_bl_desc(const struct gm200_flcn_bl_desc *desc, void *ret) +gm20b_secboot_generate_bl_desc(const struct hsf_load_header *load_hdr, + void *_bl_desc, u64 offset) { - struct gm20b_flcn_bl_desc *gdesc = ret; - u64 addr; - - memcpy(gdesc->reserved, desc->reserved, sizeof(gdesc->reserved)); - memcpy(gdesc->signature, desc->signature, sizeof(gdesc->signature)); - gdesc->ctx_dma = desc->ctx_dma; - addr = desc->code_dma_base.hi; - addr <<= 32; - addr |= desc->code_dma_base.lo; - gdesc->code_dma_base = lower_32_bits(addr >> 8); - gdesc->non_sec_code_off = desc->non_sec_code_off; - gdesc->non_sec_code_size = desc->non_sec_code_size; - gdesc->sec_code_off = desc->sec_code_off; - gdesc->sec_code_size = desc->sec_code_size; - gdesc->code_entry_point = desc->code_entry_point; - addr = desc->data_dma_base.hi; - addr <<= 32; - addr |= desc->data_dma_base.lo; - gdesc->data_dma_base = lower_32_bits(addr >> 8); - gdesc->data_size = desc->data_size; + struct gm20b_flcn_bl_desc *bl_desc = _bl_desc; + + memset(bl_desc, 0, sizeof(*bl_desc)); + bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; + bl_desc->non_sec_code_off = load_hdr->non_sec_code_off; + bl_desc->non_sec_code_size = load_hdr->non_sec_code_size; + bl_desc->sec_code_off = load_hdr->app[0].sec_code_off; + bl_desc->sec_code_size = load_hdr->app[0].sec_code_size; + bl_desc->code_entry_point = 0; + bl_desc->code_dma_base = offset >> 8; + bl_desc->data_dma_base = (offset + load_hdr->data_dma_base) >> 8; + bl_desc->data_size = load_hdr->data_size; } static const struct gm200_secboot_func gm20b_secboot_func = { .bl_desc_size = sizeof(struct gm20b_flcn_bl_desc), - .fixup_bl_desc = gm20b_secboot_fixup_bl_desc, + .generate_bl_desc = gm20b_secboot_generate_bl_desc, .prepare_blobs = gm20b_secboot_prepare_blobs, }; diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index 2a4c4d5a3c90..1922422fd539 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -217,46 +217,39 @@ struct flcn_u64 { u32 lo; u32 hi; }; + static inline u64 flcn64_to_u64(const struct flcn_u64 f) { return ((u64)f.hi) << 32 | f.lo; } +static inline struct flcn_u64 u64_to_flcn64(u64 u) +{ + struct flcn_u64 ret; + + ret.hi = upper_32_bits(u); + ret.lo = lower_32_bits(u); + + return ret; +} + +#define GM200_ACR_MAX_APPS 8 + +struct hsf_load_header_app { + u32 sec_code_off; + u32 sec_code_size; +}; + /** - * struct gm200_flcn_bl_desc - DMEM bootloader descriptor - * @signature: 16B signature for secure code. 0s if no secure code - * @ctx_dma: DMA context to be used by BL while loading code/data - * @code_dma_base: 256B-aligned Physical FB Address where code is located - * (falcon's $xcbase register) - * @non_sec_code_off: offset from code_dma_base where the non-secure code is - * located. The offset must be multiple of 256 to help perf - * @non_sec_code_size: the size of the nonSecure code part. - * @sec_code_off: offset from code_dma_base where the secure code is - * located. The offset must be multiple of 256 to help perf - * @sec_code_size: offset from code_dma_base where the secure code is - * located. The offset must be multiple of 256 to help perf - * @code_entry_point: code entry point which will be invoked by BL after - * code is loaded. - * @data_dma_base: 256B aligned Physical FB Address where data is located. - * (falcon's $xdbase register) - * @data_size: size of data block. Should be multiple of 256B - * - * Structure used by the bootloader to load the rest of the code. 
This has - * to be filled by host and copied into DMEM at offset provided in the - * hsflcn_bl_desc.bl_desc_dmem_load_off. + * struct hsf_load_header - HS firmware load header */ -struct gm200_flcn_bl_desc { - u32 reserved[4]; - u32 signature[4]; - u32 ctx_dma; - struct flcn_u64 code_dma_base; +struct hsf_load_header { u32 non_sec_code_off; u32 non_sec_code_size; - u32 sec_code_off; - u32 sec_code_size; - u32 code_entry_point; - struct flcn_u64 data_dma_base; + u32 data_dma_base; u32 data_size; + u32 num_apps; + struct hsf_load_header_app app[0]; }; /** @@ -322,11 +315,17 @@ struct gm200_secboot { * on Tegra the HS FW copies the LS blob into the fixed WPR instead */ struct nvkm_gpuobj *acr_load_blob; - struct gm200_flcn_bl_desc acr_load_bl_desc; + struct { + struct hsf_load_header load_bl_header; + struct hsf_load_header_app __load_apps[GM200_ACR_MAX_APPS]; + }; /* HS FW - unlock WPR region (dGPU only) */ struct nvkm_gpuobj *acr_unload_blob; - struct gm200_flcn_bl_desc acr_unload_bl_desc; + struct { + struct hsf_load_header unload_bl_header; + struct hsf_load_header_app __unload_apps[GM200_ACR_MAX_APPS]; + }; /* HS bootloader */ void *hsbl_blob; @@ -356,9 +355,9 @@ struct gm200_secboot { /** * Contains functions we wish to abstract between GM200-like implementations * @bl_desc_size: size of the BL descriptor used by this chip. - * @fixup_bl_desc: hook that generates the proper BL descriptor format from - * the generic GM200 format into a data array of size - * bl_desc_size + * @generate_bl_desc: hook that generates the proper BL descriptor format from + * the hsf_load_header format into a preallocated array of + * size bl_desc_size * @prepare_blobs: prepares the various blobs needed for secure booting */ struct gm200_secboot_func { @@ -368,7 +367,7 @@ struct gm200_secboot_func { * callback is called on it */ u32 bl_desc_size; - void (*fixup_bl_desc)(const struct gm200_flcn_bl_desc *, void *); + void (*generate_bl_desc)(const struct hsf_load_header *, void *, u64); int (*prepare_blobs)(struct gm200_secboot *); }; -- git-series 0.8.10
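With the fixup step gone, each chip's generate_bl_desc hook builds its descriptor straight from the hsf_load_header plus the VMA offset; the main difference the hooks absorb is how the DMA base is encoded, a lo/hi flcn_u64 pair on GM200 versus a single 32-bit word holding a 256-byte-aligned address on GM20B. A small stand-alone sketch of the two encodings follows; lower_32_bits/upper_32_bits are re-defined locally rather than taken from kernel headers, and the address is made up.

#include <stdint.h>
#include <stdio.h>

/* local stand-ins for the kernel's helpers */
#define lower_32_bits(n)	((uint32_t)((n) & 0xffffffffULL))
#define upper_32_bits(n)	((uint32_t)((uint64_t)(n) >> 32))

struct flcn_u64 { uint32_t lo; uint32_t hi; };

static struct flcn_u64 u64_to_flcn64(uint64_t u)
{
	struct flcn_u64 ret = { .lo = lower_32_bits(u), .hi = upper_32_bits(u) };
	return ret;
}

static uint64_t flcn64_to_u64(struct flcn_u64 f)
{
	return ((uint64_t)f.hi << 32) | f.lo;
}

int main(void)
{
	uint64_t vma_offset = 0x123456700ULL;	/* made-up mapping address */

	/* GM200-style encoding: full 64-bit lo/hi pair */
	struct flcn_u64 gm200 = u64_to_flcn64(vma_offset);

	/* GM20B-style encoding: one word, 256B-aligned address shifted by 8 */
	uint32_t gm20b = (uint32_t)(vma_offset >> 8);

	printf("gm200 lo/hi: %08x/%08x (round-trip 0x%llx)\n",
	       (unsigned)gm200.lo, (unsigned)gm200.hi,
	       (unsigned long long)flcn64_to_u64(gm200));
	printf("gm20b dma base: 0x%x\n", (unsigned)gm20b);
	return 0;
}

Because the descriptor is now regenerated from scratch for every run, there is also no longer any need to undo the offset addition afterwards, which is why the restore code in run_hs_blob disappears.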
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 9/33] secboot: add LS flags to LS func structure
Add a flag that can be set when declaring how a LS firmware should be loaded. This allows us to remove falcon-specific code in the loader. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 10 ++++------ drm/nouveau/nvkm/subdev/secboot/acr_r352.h | 2 ++ drm/nouveau/nvkm/subdev/secboot/acr_r361.c | 2 ++ 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 5622ae9c1a1e..716e9d915765 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -265,13 +265,9 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, desc->app_resident_data_offset; lhdr->app_data_size = desc->app_resident_data_size; - lhdr->flags = 0; + lhdr->flags = func->lhdr_flags; if (img->falcon_id == acr->base.boot_falcon) - lhdr->flags = LSF_FLAG_DMACTL_REQ_CTX; - - /* GPCCS will be loaded using PRI */ - if (img->falcon_id == NVKM_FALCON_GPCCS) - lhdr->flags |= LSF_FLAG_FORCE_PRIV_LOAD; + lhdr->flags |= LSF_FLAG_DMACTL_REQ_CTX; /* Align and save off BL descriptor size */ lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN); @@ -866,6 +862,8 @@ acr_r352_ls_gpccs_func = { .load = acr_ls_ucode_load_gpccs, .generate_bl_desc = acr_r352_generate_flcn_bl_desc, .bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc), + /* GPCCS will be loaded using PRI */ + .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD, }; const struct acr_r352_func diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h index 38ac2a73f585..d54deea763a1 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h @@ -52,12 +52,14 @@ struct hsf_load_header { * @generate_bl_desc: function called on a block of bl_desc_size to generate the * proper bootloader descriptor for this LS firmware * @bl_desc_size: size of the bootloader descriptor + * @lhdr_flags: LS flags */ struct acr_r352_ls_func { int (*load)(const struct nvkm_subdev *, struct ls_ucode_img *); void (*generate_bl_desc)(const struct nvkm_acr *, const struct ls_ucode_img *, u64, void *); u32 bl_desc_size; + u32 lhdr_flags; }; /** diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c index d2c01af50d2e..9373a724f87e 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c @@ -113,6 +113,8 @@ acr_r361_ls_gpccs_func = { .load = acr_ls_ucode_load_gpccs, .generate_bl_desc = acr_r361_generate_flcn_bl_desc, .bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc), + /* GPCCS will be loaded using PRI */ + .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD, }; const struct acr_r352_func -- git-series 0.8.10
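After this change the static portion of the LSB flags lives in the per-falcon acr_r352_ls_func entry, and the only decision left in the loader is OR'ing in LSF_FLAG_DMACTL_REQ_CTX for the boot falcon. A rough stand-alone illustration of that computation; the flag values are copied from the LSF_FLAG_* definitions earlier in the series, everything else is simplified.

#include <stdio.h>

#define LSF_FLAG_DMACTL_REQ_CTX   4
#define LSF_FLAG_FORCE_PRIV_LOAD  8

enum falcon_id { FALCON_PMU, FALCON_FECS, FALCON_GPCCS, FALCON_END };

struct ls_func { unsigned int lhdr_flags; };

static const struct ls_func demo_ls_func[FALCON_END] = {
	[FALCON_FECS]  = { .lhdr_flags = 0 },
	/* GPCCS is loaded through PRI, hence the extra static flag */
	[FALCON_GPCCS] = { .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD },
};

static unsigned int lsb_flags(enum falcon_id id, enum falcon_id boot_falcon)
{
	unsigned int flags = demo_ls_func[id].lhdr_flags;

	/* the falcon running the ACR needs a DMA context requested for it */
	if (id == boot_falcon)
		flags |= LSF_FLAG_DMACTL_REQ_CTX;
	return flags;
}

int main(void)
{
	printf("FECS flags:  %#x\n", lsb_flags(FALCON_FECS, FALCON_PMU));
	printf("GPCCS flags: %#x\n", lsb_flags(FALCON_GPCCS, FALCON_PMU));
	return 0;
}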
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 10/33] secboot: split reset function
Split the reset function into more meaningful and reusable ones. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/subdev/secboot.h | 3 +- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 78 +++++++++++++++-------- 2 files changed, 56 insertions(+), 25 deletions(-) diff --git a/drm/nouveau/include/nvkm/subdev/secboot.h b/drm/nouveau/include/nvkm/subdev/secboot.h index d93161090233..2da7b05c0114 100644 --- a/drm/nouveau/include/nvkm/subdev/secboot.h +++ b/drm/nouveau/include/nvkm/subdev/secboot.h @@ -29,6 +29,7 @@ /** * @base: base IO address of the falcon performing secure boot * @debug_mode: whether the debug or production signatures should be used + * @wpr_set: whether the WPR region is currently set */ struct nvkm_secboot { const struct nvkm_secboot_func *func; @@ -42,6 +43,8 @@ struct nvkm_secboot { u32 wpr_size; bool debug_mode; + + bool wpr_set; }; #define nvkm_secboot(p) container_of((p), struct nvkm_secboot, subdev) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 716e9d915765..3320c96cb0a3 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -744,6 +744,54 @@ acr_r352_load(struct nvkm_acr *_acr, struct nvkm_secboot *sb, return 0; } +static int +acr_r352_shutdown(struct acr_r352 *acr, struct nvkm_secboot *sb) +{ + int i; + + /* Run the unload blob to unprotect the WPR region */ + if (acr->unload_blob && sb->wpr_set) { + int ret; + + nvkm_debug(&sb->subdev, "running HS unload blob\n"); + ret = sb->func->run_blob(sb, acr->unload_blob); + if (ret) + return ret; + nvkm_debug(&sb->subdev, "HS unload blob completed\n"); + } + + for (i = 0; i < NVKM_FALCON_END; i++) + acr->falcon_state[i] = NON_SECURE; + + sb->wpr_set = false; + + return 0; +} + +static int +acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) +{ + int ret; + + if (sb->wpr_set) + return 0; + + /* Make sure all blobs are ready */ + ret = acr_r352_load_blobs(acr, sb); + if (ret) + return ret; + + nvkm_debug(&sb->subdev, "running HS load blob\n"); + ret = sb->func->run_blob(sb, acr->load_blob); + if (ret) + return ret; + nvkm_debug(&sb->subdev, "HS load blob completed\n"); + + sb->wpr_set = true; + + return 0; +} + /* * acr_r352_reset() - execute secure boot from the prepared state * @@ -758,11 +806,6 @@ acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb, struct acr_r352 *acr = acr_r352(_acr); int ret; - /* Make sure all blobs are ready */ - ret = acr_r352_load_blobs(acr, sb); - if (ret) - return ret; - /* * Dummy GM200 implementation: perform secure boot each time we are * called on FECS. 
Since only FECS and GPCCS are managed and started @@ -774,16 +817,11 @@ acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb, if (falcon != NVKM_FALCON_FECS) goto end; - /* If WPR is set and we have an unload blob, run it to unlock WPR */ - if (acr->unload_blob && - acr->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) { - ret = sb->func->run_blob(sb, acr->unload_blob); - if (ret) - return ret; - } + ret = acr_r352_shutdown(acr, sb); + if (ret) + return ret; - /* Reload all managed falcons */ - ret = sb->func->run_blob(sb, acr->load_blob); + acr_r352_bootstrap(acr, sb); if (ret) return ret; @@ -822,18 +860,8 @@ static int acr_r352_fini(struct nvkm_acr *_acr, struct nvkm_secboot *sb, bool suspend) { struct acr_r352 *acr = acr_r352(_acr); - int ret = 0; - int i; - /* Run the unload blob to unprotect the WPR region */ - if (acr->unload_blob && - acr->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) - ret = sb->func->run_blob(sb, acr->unload_blob); - - for (i = 0; i < NVKM_FALCON_END; i++) - acr->falcon_state[i] = NON_SECURE; - - return ret; + return acr_r352_shutdown(acr, sb); } static void -- git-series 0.8.10
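The split yields two small state transitions guarded by sb->wpr_set: shutdown runs the unload blob only if WPR is currently set, bootstrap runs the load blob only if it is not, and reset simply chains the two. Below is a condensed stand-alone model of that sequencing; run_blob is faked and always succeeds, since only the control flow is of interest.

#include <stdbool.h>
#include <stdio.h>

struct secboot { bool wpr_set; };

/* stand-in for sb->func->run_blob(): pretend the HS blob always succeeds */
static int run_blob(struct secboot *sb, const char *name)
{
	(void)sb;
	printf("running HS %s blob\n", name);
	return 0;
}

static int shutdown(struct secboot *sb)
{
	if (sb->wpr_set) {
		int ret = run_blob(sb, "unload");
		if (ret)
			return ret;
	}
	sb->wpr_set = false;
	return 0;
}

static int bootstrap(struct secboot *sb)
{
	int ret;

	if (sb->wpr_set)	/* nothing to do, WPR already programmed */
		return 0;
	ret = run_blob(sb, "load");
	if (ret)
		return ret;
	sb->wpr_set = true;
	return 0;
}

static int reset(struct secboot *sb)
{
	int ret = shutdown(sb);

	return ret ? ret : bootstrap(sb);
}

int main(void)
{
	struct secboot sb = { .wpr_set = false };

	reset(&sb);	/* first reset: load only */
	reset(&sb);	/* later resets: unload, then load again */
	return 0;
}

Keeping the two halves separate is what lets fini() reuse shutdown() directly and lets later patches bootstrap lazily without re-running the unload blob.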
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 11/33] secboot: disable falcon interrupts before running
Make sure we are not disturbed by spurious interrupts, as we poll the halt bit anyway. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 4 ++++ 1 file changed, 4 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index c88895f90db8..3239a2723e70 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -26,6 +26,7 @@ #include <core/gpuobj.h> #include <subdev/fb.h> +#include <subdev/mc.h> /** * gm200_secboot_setup_falcon() - set up the secure falcon for secure boot @@ -99,6 +100,9 @@ gm200_secboot_run_blob(struct nvkm_secboot *sb, struct nvkm_gpuobj *blob) if (ret) goto done; + /* Disable interrupts as we will poll for the HALT bit */ + nvkm_mc_intr_mask(sb->subdev.device, sb->devidx, false); + /* Start the HS bootloader */ ret = nvkm_secboot_falcon_run(sb); if (ret) -- git-series 0.8.10
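Since completion is detected by polling the falcon's HALT bit rather than by an interrupt handler, masking the interrupt line before starting the bootloader avoids a spurious interrupt with nobody to service it. In outline, as a generic poll-with-timeout sketch; the state and helpers below are invented for illustration and do not correspond to real falcon registers.

#include <stdbool.h>
#include <stdio.h>

/* invented stand-ins for the falcon's interrupt mask and halt status */
static bool intr_enabled = true;
static int cycles_until_halt = 3;

static void intr_mask(bool enable) { intr_enabled = enable; }
static bool falcon_halted(void) { return --cycles_until_halt <= 0; }

static int run_and_wait_halt(int timeout)
{
	/* we poll for HALT, so make sure no interrupt fires meanwhile */
	intr_mask(false);

	while (timeout--) {
		if (falcon_halted()) {
			printf("falcon halted\n");
			return 0;
		}
	}
	return -1;	/* timed out */
}

int main(void)
{
	return run_and_wait_halt(100) ? 1 : 0;
}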
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 12/33] secboot: remove unneeded ls_ucode_img member
ucode_header is not used anywhere, so just get rid of it. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 25 +++++--------------- drm/nouveau/nvkm/subdev/secboot/ls_ucode.h | 2 +-- drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c | 2 +-- 3 files changed, 7 insertions(+), 22 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 3320c96cb0a3..36368545d693 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -214,12 +214,6 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, const struct acr_r352_ls_func *func acr->func->ls_func[img->falcon_id]; - if (img->ucode_header) { - nvkm_fatal(acr->base.subdev, - "images withough loader are not supported yet!\n"); - return offset; - } - /* Fill WPR header */ whdr->falcon_id = img->falcon_id; whdr->bootstrap_owner = acr->base.boot_falcon; @@ -308,7 +302,6 @@ ls_ucode_mgr_cleanup(struct ls_ucode_mgr *mgr) list_for_each_entry_safe(img, t, &mgr->img_list, node) { kfree(img->ucode_data); - kfree(img->ucode_header); kfree(img); } } @@ -361,6 +354,10 @@ ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct ls_ucode_mgr *mgr, nvkm_kmap(wpr_blob); list_for_each_entry(img, &mgr->img_list, node) { + const struct acr_r352_ls_func *ls_func + acr->func->ls_func[img->falcon_id]; + u8 gdesc[ls_func->bl_desc_size]; + nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header, sizeof(img->wpr_header)); @@ -368,18 +365,10 @@ ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct ls_ucode_mgr *mgr, &img->lsb_header, sizeof(img->lsb_header)); /* Generate and write BL descriptor */ - if (!img->ucode_header) { - const struct acr_r352_ls_func *ls_func - acr->func->ls_func[img->falcon_id]; - u8 gdesc[ls_func->bl_desc_size]; - - ls_func->generate_bl_desc(&acr->base, img, wpr_addr, - gdesc); + ls_func->generate_bl_desc(&acr->base, img, wpr_addr, gdesc); - nvkm_gpuobj_memcpy_to(wpr_blob, - img->lsb_header.bl_data_off, - gdesc, ls_func->bl_desc_size); - } + nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off, + gdesc, ls_func->bl_desc_size); /* Copy ucode */ nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off, diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h index 0518371a287c..3d8c42e11847 100644 --- a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h @@ -173,7 +173,6 @@ struct ls_ucode_img_desc { * @node: to link within lsf_ucode_mgr * @falcon_id: ID of the falcon this LS firmware is for * @ucode_desc: loaded or generated map of ucode_data - * @ucode_header: header of the firmware * @ucode_data: firmware payload (code and data) * @ucode_size: size in bytes of data in ucode_data * @wpr_header: WPR header to be written to the LS blob @@ -188,7 +187,6 @@ struct ls_ucode_img { enum nvkm_falconidx falcon_id; struct ls_ucode_img_desc ucode_desc; - u32 *ucode_header; u8 *ucode_data; u32 ucode_size; diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c index 09f5f1f1a50d..1c32cb0f16f9 100644 --- a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c @@ -98,8 +98,6 @@ ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, char f[64]; int ret; - img->ucode_header = NULL; - snprintf(f, sizeof(f), "gr/%s_bl", falcon_name); ret = nvkm_firmware_get(subdev->device, f, &bl); if 
(ret) -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 13/33] secboot: remove ls_ucode_mgr
This was used only locally to one function and can be replaced by ad-hoc variables. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 90 ++++++++--------------- 1 file changed, 33 insertions(+), 57 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 36368545d693..1025f55b4310 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -277,75 +277,44 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, } /** - * struct ls_ucode_mgr - manager for all LS falcon firmwares - * @count: number of managed LS falcons - * @wpr_size: size of the required WPR region in bytes - * @img_list: linked list of lsf_ucode_img + * acr_r352_ls_fill_headers - fill WPR and LSB headers of all managed images */ -struct ls_ucode_mgr { - u16 count; - u32 wpr_size; - struct list_head img_list; -}; - -static void -ls_ucode_mgr_init(struct ls_ucode_mgr *mgr) -{ - memset(mgr, 0, sizeof(*mgr)); - INIT_LIST_HEAD(&mgr->img_list); -} - -static void -ls_ucode_mgr_cleanup(struct ls_ucode_mgr *mgr) -{ - struct ls_ucode_img *img, *t; - - list_for_each_entry_safe(img, t, &mgr->img_list, node) { - kfree(img->ucode_data); - kfree(img); - } -} - -static void -ls_ucode_mgr_add_img(struct ls_ucode_mgr *mgr, struct ls_ucode_img *img) -{ - mgr->count++; - list_add_tail(&img->node, &mgr->img_list); -} - -/** - * ls_ucode_mgr_fill_headers - fill WPR and LSB headers of all managed images - */ -static void -ls_ucode_mgr_fill_headers(struct acr_r352 *acr, struct ls_ucode_mgr *mgr) +static int +acr_r352_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs) { struct ls_ucode_img *img; + struct list_head *l; + u32 count = 0; u32 offset; + /* Count the number of images to manage */ + list_for_each(l, imgs) + count++; + /* * Start with an array of WPR headers at the base of the WPR. * The expectation here is that the secure falcon will do a single DMA * read of this array and cache it internally so it's ok to pack these. * Also, we add 1 to the falcon count to indicate the end of the array. */ - offset = sizeof(struct lsf_wpr_header) * (mgr->count + 1); + offset = sizeof(struct lsf_wpr_header) * (count + 1); /* * Walk the managed falcons, accounting for the LSB structs * as well as the ucode images. 
*/ - list_for_each_entry(img, &mgr->img_list, node) { + list_for_each_entry(img, imgs, node) { offset = ls_ucode_img_fill_headers(acr, img, offset); } - mgr->wpr_size = offset; + return offset; } /** * ls_ucode_mgr_write_wpr - write the WPR blob contents */ static int -ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct ls_ucode_mgr *mgr, +ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct list_head *imgs, struct nvkm_gpuobj *wpr_blob, u32 wpr_addr) { struct ls_ucode_img *img; @@ -353,7 +322,7 @@ ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct ls_ucode_mgr *mgr, nvkm_kmap(wpr_blob); - list_for_each_entry(img, &mgr->img_list, node) { + list_for_each_entry(img, imgs, node) { const struct acr_r352_ls_func *ls_func acr->func->ls_func[img->falcon_id]; u8 gdesc[ls_func->bl_desc_size]; @@ -398,12 +367,15 @@ static int acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) { const struct nvkm_subdev *subdev = acr->base.subdev; - struct ls_ucode_mgr mgr; + struct list_head imgs; + struct ls_ucode_img *img, *t; unsigned long managed_falcons = acr->base.managed_falcons; + int managed_count = 0; + u32 image_wpr_size; int falcon_id; int ret; - ls_ucode_mgr_init(&mgr); + INIT_LIST_HEAD(&imgs); /* Load all LS blobs */ for_each_set_bit(falcon_id, &managed_falcons, NVKM_FALCON_END) { @@ -416,48 +388,52 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) ret = PTR_ERR(img); goto cleanup; } - ls_ucode_mgr_add_img(&mgr, img); + list_add_tail(&img->node, &imgs); + managed_count++; } /* * Fill the WPR and LSF headers with the right offsets and compute * required WPR size */ - ls_ucode_mgr_fill_headers(acr, &mgr); - mgr.wpr_size = ALIGN(mgr.wpr_size, WPR_ALIGNMENT); + image_wpr_size = acr_r352_ls_fill_headers(acr, &imgs); + image_wpr_size = ALIGN(image_wpr_size, WPR_ALIGNMENT); /* Allocate GPU object that will contain the WPR region */ - ret = nvkm_gpuobj_new(subdev->device, mgr.wpr_size, WPR_ALIGNMENT, + ret = nvkm_gpuobj_new(subdev->device, image_wpr_size, WPR_ALIGNMENT, false, NULL, &acr->ls_blob); if (ret) goto cleanup; nvkm_debug(subdev, "%d managed LS falcons, WPR size is %d bytes\n", - mgr.count, mgr.wpr_size); + managed_count, image_wpr_size); /* If WPR address and size are not fixed, set them to fit the LS blob */ if (wpr_size == 0) { wpr_addr = acr->ls_blob->addr; - wpr_size = mgr.wpr_size; + wpr_size = image_wpr_size; /* * But if the WPR region is set by the bootloader, it is illegal for * the HS blob to be larger than this region. */ - } else if (mgr.wpr_size > wpr_size) { + } else if (image_wpr_size > wpr_size) { nvkm_error(subdev, "WPR region too small for FW blob!\n"); - nvkm_error(subdev, "required: %dB\n", mgr.wpr_size); + nvkm_error(subdev, "required: %dB\n", image_wpr_size); nvkm_error(subdev, "available: %dB\n", wpr_size); ret = -ENOSPC; goto cleanup; } /* Write LS blob */ - ret = ls_ucode_mgr_write_wpr(acr, &mgr, acr->ls_blob, wpr_addr); + ret = ls_ucode_mgr_write_wpr(acr, &imgs, acr->ls_blob, wpr_addr); if (ret) nvkm_gpuobj_del(&acr->ls_blob); cleanup: - ls_ucode_mgr_cleanup(&mgr); + list_for_each_entry_safe(img, t, &imgs, node) { + kfree(img->ucode_data); + kfree(img); + } return ret; } -- git-series 0.8.10
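Dropping the manager struct works because both of its fields were easy to recompute: the image count is just the length of the list, and the WPR size is whatever the header-filling pass returns. The same pattern, reduced to a stand-alone sketch that uses a plain singly-linked list in place of the kernel's list_head:

#include <stdio.h>
#include <stdlib.h>

struct ls_img {
	int falcon_id;
	struct ls_img *next;
};

int main(void)
{
	struct ls_img *imgs = NULL, *img, *next;
	int count = 0;

	/* load phase: link each successfully loaded image */
	for (int id = 0; id < 2; id++) {
		img = calloc(1, sizeof(*img));
		if (!img)
			break;
		img->falcon_id = id;
		img->next = imgs;
		imgs = img;
	}

	/* header phase: the count is recomputed by walking the list */
	for (img = imgs; img; img = img->next)
		count++;
	printf("%d managed LS falcons\n", count);

	/* cleanup phase: walk with a saved next pointer, like
	 * list_for_each_entry_safe() in the patch */
	for (img = imgs; img; img = next) {
		next = img->next;
		free(img);
	}
	return 0;
}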
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 14/33] secboot: abstract LS firmware loading functions
The WPR and LSB headers, used to generate the LS blob, may have a different layout and sizes depending on the driver version they come from. Abstract them and confine their use to driver-specific code. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 102 +++++++++------- drm/nouveau/nvkm/subdev/secboot/acr_r352.h | 119 +++++++++++++++++++- drm/nouveau/nvkm/subdev/secboot/acr_r361.c | 9 +- drm/nouveau/nvkm/subdev/secboot/ls_ucode.h | 100 +---------------- drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c | 39 ++---- 5 files changed, 208 insertions(+), 161 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 1025f55b4310..a552b55eadb8 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -21,7 +21,6 @@ */ #include "acr_r352.h" -#include "ls_ucode.h" #include <core/gpuobj.h> #include <core/firmware.h> @@ -93,11 +92,12 @@ struct acr_r352_flcn_bl_desc { */ static void acr_r352_generate_flcn_bl_desc(const struct nvkm_acr *acr, - const struct ls_ucode_img *img, u64 wpr_addr, + const struct ls_ucode_img *_img, u64 wpr_addr, void *_desc) { + struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img); struct acr_r352_flcn_bl_desc *desc = _desc; - const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + const struct ls_ucode_img_desc *pdesc = &_img->ucode_desc; u64 base, addr_code, addr_data; base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; @@ -162,29 +162,46 @@ struct hsflcn_acr_desc { * Low-secure blob creation */ -typedef int (*lsf_load_func)(const struct nvkm_subdev *, struct ls_ucode_img *); - /** * ls_ucode_img_load() - create a lsf_ucode_img and load it */ -static struct ls_ucode_img * -ls_ucode_img_load(const struct nvkm_subdev *subdev, lsf_load_func load_func) +struct ls_ucode_img * +acr_r352_ls_ucode_img_load(const struct acr_r352 *acr, + enum nvkm_falconidx falcon_id) { - struct ls_ucode_img *img; + const struct nvkm_subdev *subdev = acr->base.subdev; + struct ls_ucode_img_r352 *img; int ret; img = kzalloc(sizeof(*img), GFP_KERNEL); if (!img) return ERR_PTR(-ENOMEM); - ret = load_func(subdev, img); + img->base.falcon_id = falcon_id; + + ret = acr->func->ls_func[falcon_id]->load(subdev, &img->base); if (ret) { + kfree(img->base.ucode_data); + kfree(img->base.sig); kfree(img); return ERR_PTR(ret); } - return img; + /* Check that the signature size matches our expectations... */ + if (img->base.sig_size != sizeof(img->lsb_header.signature)) { + nvkm_error(subdev, "invalid signature size for %s falcon!\n", + nvkm_falcon_name[falcon_id]); + return ERR_PTR(-EINVAL); + } + + /* Copy signature to the right place */ + memcpy(&img->lsb_header.signature, img->base.sig, img->base.sig_size); + + /* not needed? the signature should already have the right value */ + img->lsb_header.signature.falcon_id = falcon_id; + + return &img->base; } #define LSF_LSB_HEADER_ALIGN 256 @@ -194,7 +211,7 @@ ls_ucode_img_load(const struct nvkm_subdev *subdev, lsf_load_func load_func) #define LSF_UCODE_DATA_ALIGN 4096 /** - * ls_ucode_img_fill_headers - fill the WPR and LSB headers of an image + * acr_r352_ls_img_fill_headers - fill the WPR and LSB headers of an image * @acr: ACR to use * @img: image to generate for * @offset: offset in the WPR region where this image starts @@ -205,24 +222,25 @@ ls_ucode_img_load(const struct nvkm_subdev *subdev, lsf_load_func load_func) * Return: offset at the end of this image. 
*/ static u32 -ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, - u32 offset) +acr_r352_ls_img_fill_headers(struct acr_r352 *acr, + struct ls_ucode_img_r352 *img, u32 offset) { - struct lsf_wpr_header *whdr = &img->wpr_header; - struct lsf_lsb_header *lhdr = &img->lsb_header; - struct ls_ucode_img_desc *desc = &img->ucode_desc; + struct ls_ucode_img *_img = &img->base; + struct acr_r352_lsf_wpr_header *whdr = &img->wpr_header; + struct acr_r352_lsf_lsb_header *lhdr = &img->lsb_header; + struct ls_ucode_img_desc *desc = &_img->ucode_desc; const struct acr_r352_ls_func *func - acr->func->ls_func[img->falcon_id]; + acr->func->ls_func[_img->falcon_id]; /* Fill WPR header */ - whdr->falcon_id = img->falcon_id; + whdr->falcon_id = _img->falcon_id; whdr->bootstrap_owner = acr->base.boot_falcon; whdr->status = LSF_IMAGE_STATUS_COPY; /* Align, save off, and include an LSB header size */ offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN); whdr->lsb_offset = offset; - offset += sizeof(struct lsf_lsb_header); + offset += sizeof(*lhdr); /* * Align, save off, and include the original (static) ucode @@ -230,7 +248,7 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, */ offset = ALIGN(offset, LSF_UCODE_DATA_ALIGN); lhdr->ucode_off = offset; - offset += img->ucode_size; + offset += _img->ucode_size; /* * For falcons that use a boot loader (BL), we append a loader @@ -260,7 +278,7 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, lhdr->app_data_size = desc->app_resident_data_size; lhdr->flags = func->lhdr_flags; - if (img->falcon_id == acr->base.boot_falcon) + if (_img->falcon_id == acr->base.boot_falcon) lhdr->flags |= LSF_FLAG_DMACTL_REQ_CTX; /* Align and save off BL descriptor size */ @@ -279,10 +297,10 @@ ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, /** * acr_r352_ls_fill_headers - fill WPR and LSB headers of all managed images */ -static int +int acr_r352_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs) { - struct ls_ucode_img *img; + struct ls_ucode_img_r352 *img; struct list_head *l; u32 count = 0; u32 offset; @@ -297,34 +315,35 @@ acr_r352_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs) * read of this array and cache it internally so it's ok to pack these. * Also, we add 1 to the falcon count to indicate the end of the array. */ - offset = sizeof(struct lsf_wpr_header) * (count + 1); + offset = sizeof(img->wpr_header) * (count + 1); /* * Walk the managed falcons, accounting for the LSB structs * as well as the ucode images. 
*/ - list_for_each_entry(img, imgs, node) { - offset = ls_ucode_img_fill_headers(acr, img, offset); + list_for_each_entry(img, imgs, base.node) { + offset = acr_r352_ls_img_fill_headers(acr, img, offset); } return offset; } /** - * ls_ucode_mgr_write_wpr - write the WPR blob contents + * acr_r352_ls_write_wpr - write the WPR blob contents */ -static int -ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct list_head *imgs, - struct nvkm_gpuobj *wpr_blob, u32 wpr_addr) +int +acr_r352_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs, + struct nvkm_gpuobj *wpr_blob, u32 wpr_addr) { - struct ls_ucode_img *img; + struct ls_ucode_img *_img; u32 pos = 0; nvkm_kmap(wpr_blob); - list_for_each_entry(img, imgs, node) { + list_for_each_entry(_img, imgs, node) { + struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img); const struct acr_r352_ls_func *ls_func - acr->func->ls_func[img->falcon_id]; + acr->func->ls_func[_img->falcon_id]; u8 gdesc[ls_func->bl_desc_size]; nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header, @@ -334,14 +353,14 @@ ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct list_head *imgs, &img->lsb_header, sizeof(img->lsb_header)); /* Generate and write BL descriptor */ - ls_func->generate_bl_desc(&acr->base, img, wpr_addr, gdesc); + ls_func->generate_bl_desc(&acr->base, _img, wpr_addr, gdesc); nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off, gdesc, ls_func->bl_desc_size); /* Copy ucode */ nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off, - img->ucode_data, img->ucode_size); + _img->ucode_data, _img->ucode_size); pos += sizeof(img->wpr_header); } @@ -381,13 +400,12 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) for_each_set_bit(falcon_id, &managed_falcons, NVKM_FALCON_END) { struct ls_ucode_img *img; - img = ls_ucode_img_load(subdev, - acr->func->ls_func[falcon_id]->load); - + img = acr->func->ls_ucode_img_load(acr, falcon_id); if (IS_ERR(img)) { ret = PTR_ERR(img); goto cleanup; } + list_add_tail(&img->node, &imgs); managed_count++; } @@ -396,7 +414,7 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) * Fill the WPR and LSF headers with the right offsets and compute * required WPR size */ - image_wpr_size = acr_r352_ls_fill_headers(acr, &imgs); + image_wpr_size = acr->func->ls_fill_headers(acr, &imgs); image_wpr_size = ALIGN(image_wpr_size, WPR_ALIGNMENT); /* Allocate GPU object that will contain the WPR region */ @@ -425,13 +443,14 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) } /* Write LS blob */ - ret = ls_ucode_mgr_write_wpr(acr, &imgs, acr->ls_blob, wpr_addr); + ret = acr->func->ls_write_wpr(acr, &imgs, acr->ls_blob, wpr_addr); if (ret) nvkm_gpuobj_del(&acr->ls_blob); cleanup: list_for_each_entry_safe(img, t, &imgs, node) { kfree(img->ucode_data); + kfree(img->sig); kfree(img); } @@ -863,6 +882,9 @@ const struct acr_r352_func acr_r352_func = { .generate_hs_bl_desc = acr_r352_generate_hs_bl_desc, .hs_bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc), + .ls_ucode_img_load = acr_r352_ls_ucode_img_load, + .ls_fill_headers = acr_r352_ls_fill_headers, + .ls_write_wpr = acr_r352_ls_write_wpr, .ls_func = { [NVKM_FALCON_FECS] = &acr_r352_ls_fecs_func, [NVKM_FALCON_GPCCS] = &acr_r352_ls_gpccs_func, diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h index d54deea763a1..18dd3d95cc56 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h @@ -23,11 +23,116 @@ #define 
__NVKM_SECBOOT_ACR_R352_H__ #include "acr.h" +#include "ls_ucode.h" struct ls_ucode_img; #define ACR_R352_MAX_APPS 8 +/* + * + * LS blob structures + * + */ + +/** + * struct acr_r352_lsf_lsb_header - LS firmware header + * @signature: signature to verify the firmware against + * @ucode_off: offset of the ucode blob in the WPR region. The ucode + * blob contains the bootloader, code and data of the + * LS falcon + * @ucode_size: size of the ucode blob, including bootloader + * @data_size: size of the ucode blob data + * @bl_code_size: size of the bootloader code + * @bl_imem_off: offset in imem of the bootloader + * @bl_data_off: offset of the bootloader data in WPR region + * @bl_data_size: size of the bootloader data + * @app_code_off: offset of the app code relative to ucode_off + * @app_code_size: size of the app code + * @app_data_off: offset of the app data relative to ucode_off + * @app_data_size: size of the app data + * @flags: flags for the secure bootloader + * + * This structure is written into the WPR region for each managed falcon. Each + * instance is referenced by the lsb_offset member of the corresponding + * lsf_wpr_header. + */ +struct acr_r352_lsf_lsb_header { + /** + * LS falcon signatures + * @prd_keys: signature to use in production mode + * @dgb_keys: signature to use in debug mode + * @b_prd_present: whether the production key is present + * @b_dgb_present: whether the debug key is present + * @falcon_id: ID of the falcon the ucode applies to + */ + struct { + u8 prd_keys[2][16]; + u8 dbg_keys[2][16]; + u32 b_prd_present; + u32 b_dbg_present; + u32 falcon_id; + } signature; + u32 ucode_off; + u32 ucode_size; + u32 data_size; + u32 bl_code_size; + u32 bl_imem_off; + u32 bl_data_off; + u32 bl_data_size; + u32 app_code_off; + u32 app_code_size; + u32 app_data_off; + u32 app_data_size; + u32 flags; +#define LSF_FLAG_LOAD_CODE_AT_0 1 +#define LSF_FLAG_DMACTL_REQ_CTX 4 +#define LSF_FLAG_FORCE_PRIV_LOAD 8 +}; + +/** + * struct acr_r352_lsf_wpr_header - LS blob WPR Header + * @falcon_id: LS falcon ID + * @lsb_offset: offset of the lsb_lsf_header in the WPR region + * @bootstrap_owner: secure falcon reponsible for bootstrapping the LS falcon + * @lazy_bootstrap: skip bootstrapping by ACR + * @status: bootstrapping status + * + * An array of these is written at the beginning of the WPR region, one for + * each managed falcon. The array is terminated by an instance which falcon_id + * is LSF_FALCON_ID_INVALID. 
+ */ +struct acr_r352_lsf_wpr_header { + u32 falcon_id; + u32 lsb_offset; + u32 bootstrap_owner; + u32 lazy_bootstrap; + u32 status; +#define LSF_IMAGE_STATUS_NONE 0 +#define LSF_IMAGE_STATUS_COPY 1 +#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2 +#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3 +#define LSF_IMAGE_STATUS_VALIDATION_DONE 4 +#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5 +#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6 +}; + +/** + * struct ls_ucode_img_r352 - ucode image augmented with r352 headers + */ +struct ls_ucode_img_r352 { + struct ls_ucode_img base; + + struct acr_r352_lsf_wpr_header wpr_header; + struct acr_r352_lsf_lsb_header lsb_header; +}; +#define ls_ucode_img_r352(i) container_of(i, struct ls_ucode_img_r352, base) + + +/* + * HS blob structures + */ + struct hsf_load_header_app { u32 sec_code_off; u32 sec_code_size; @@ -62,6 +167,8 @@ struct acr_r352_ls_func { u32 lhdr_flags; }; +struct acr_r352; + /** * struct acr_r352_func - manages nuances between ACR versions * @@ -74,6 +181,12 @@ struct acr_r352_func { u64); u32 hs_bl_desc_size; + struct ls_ucode_img *(*ls_ucode_img_load)(const struct acr_r352 *, + enum nvkm_falconidx); + int (*ls_fill_headers)(struct acr_r352 *, struct list_head *); + int (*ls_write_wpr)(struct acr_r352 *, struct list_head *, + struct nvkm_gpuobj *, u32); + const struct acr_r352_ls_func *ls_func[NVKM_FALCON_END]; }; @@ -125,4 +238,10 @@ struct acr_r352 { struct nvkm_acr *acr_r352_new_(const struct acr_r352_func *, enum nvkm_falconidx, unsigned long); +struct ls_ucode_img *acr_r352_ls_ucode_img_load(const struct acr_r352 *, + enum nvkm_falconidx); +int acr_r352_ls_fill_headers(struct acr_r352 *, struct list_head *); +int acr_r352_ls_write_wpr(struct acr_r352 *, struct list_head *, + struct nvkm_gpuobj *, u32); + #endif diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c index 9373a724f87e..d79ec7d38f9a 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c @@ -21,7 +21,6 @@ */ #include "acr_r352.h" -#include "ls_ucode.h" /** * struct acr_r361_flcn_bl_desc - DMEM bootloader descriptor @@ -62,11 +61,12 @@ struct acr_r361_flcn_bl_desc { static void acr_r361_generate_flcn_bl_desc(const struct nvkm_acr *acr, - const struct ls_ucode_img *img, u64 wpr_addr, + const struct ls_ucode_img *_img, u64 wpr_addr, void *_desc) { + struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img); struct acr_r361_flcn_bl_desc *desc = _desc; - const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + const struct ls_ucode_img_desc *pdesc = &img->base.ucode_desc; u64 base, addr_code, addr_data; base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; @@ -121,6 +121,9 @@ const struct acr_r352_func acr_r361_func = { .generate_hs_bl_desc = acr_r361_generate_hs_bl_desc, .hs_bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc), + .ls_ucode_img_load = acr_r352_ls_ucode_img_load, + .ls_fill_headers = acr_r352_ls_fill_headers, + .ls_write_wpr = acr_r352_ls_write_wpr, .ls_func = { [NVKM_FALCON_FECS] = &acr_r361_ls_fecs_func, [NVKM_FALCON_GPCCS] = &acr_r361_ls_gpccs_func, diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h index 3d8c42e11847..7f4292f740b5 100644 --- a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h @@ -27,98 +27,6 @@ #include <core/falcon.h> #include <core/subdev.h> -/* - * - * LS blob structures - * - */ - -/** - * struct lsf_ucode_desc - LS falcon 
signatures - * @prd_keys: signature to use when the GPU is in production mode - * @dgb_keys: signature to use when the GPU is in debug mode - * @b_prd_present: whether the production key is present - * @b_dgb_present: whether the debug key is present - * @falcon_id: ID of the falcon the ucode applies to - * - * Directly loaded from a signature file. - */ -struct lsf_ucode_desc { - u8 prd_keys[2][16]; - u8 dbg_keys[2][16]; - u32 b_prd_present; - u32 b_dbg_present; - u32 falcon_id; -}; - -/** - * struct lsf_lsb_header - LS firmware header - * @signature: signature to verify the firmware against - * @ucode_off: offset of the ucode blob in the WPR region. The ucode - * blob contains the bootloader, code and data of the - * LS falcon - * @ucode_size: size of the ucode blob, including bootloader - * @data_size: size of the ucode blob data - * @bl_code_size: size of the bootloader code - * @bl_imem_off: offset in imem of the bootloader - * @bl_data_off: offset of the bootloader data in WPR region - * @bl_data_size: size of the bootloader data - * @app_code_off: offset of the app code relative to ucode_off - * @app_code_size: size of the app code - * @app_data_off: offset of the app data relative to ucode_off - * @app_data_size: size of the app data - * @flags: flags for the secure bootloader - * - * This structure is written into the WPR region for each managed falcon. Each - * instance is referenced by the lsb_offset member of the corresponding - * lsf_wpr_header. - */ -struct lsf_lsb_header { - struct lsf_ucode_desc signature; - u32 ucode_off; - u32 ucode_size; - u32 data_size; - u32 bl_code_size; - u32 bl_imem_off; - u32 bl_data_off; - u32 bl_data_size; - u32 app_code_off; - u32 app_code_size; - u32 app_data_off; - u32 app_data_size; - u32 flags; -#define LSF_FLAG_LOAD_CODE_AT_0 1 -#define LSF_FLAG_DMACTL_REQ_CTX 4 -#define LSF_FLAG_FORCE_PRIV_LOAD 8 -}; - -/** - * struct lsf_wpr_header - LS blob WPR Header - * @falcon_id: LS falcon ID - * @lsb_offset: offset of the lsb_lsf_header in the WPR region - * @bootstrap_owner: secure falcon reponsible for bootstrapping the LS falcon - * @lazy_bootstrap: skip bootstrapping by ACR - * @status: bootstrapping status - * - * An array of these is written at the beginning of the WPR region, one for - * each managed falcon. The array is terminated by an instance which falcon_id - * is LSF_FALCON_ID_INVALID. - */ -struct lsf_wpr_header { - u32 falcon_id; - u32 lsb_offset; - u32 bootstrap_owner; - u32 lazy_bootstrap; - u32 status; -#define LSF_IMAGE_STATUS_NONE 0 -#define LSF_IMAGE_STATUS_COPY 1 -#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2 -#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3 -#define LSF_IMAGE_STATUS_VALIDATION_DONE 4 -#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5 -#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6 -}; - /** * struct ls_ucode_img_desc - descriptor of firmware image @@ -175,8 +83,8 @@ struct ls_ucode_img_desc { * @ucode_desc: loaded or generated map of ucode_data * @ucode_data: firmware payload (code and data) * @ucode_size: size in bytes of data in ucode_data - * @wpr_header: WPR header to be written to the LS blob - * @lsb_header: LSB header to be written to the LS blob + * @sig: signature for this firmware + * @sig:size: size of the signature in bytes * * Preparing the WPR LS blob requires information about all the LS firmwares * (size, etc) to be known. 
This structure contains all the data of one LS @@ -190,8 +98,8 @@ struct ls_ucode_img { u8 *ucode_data; u32 ucode_size; - struct lsf_wpr_header wpr_header; - struct lsf_lsb_header lsb_header; + u8 *sig; + u32 sig_size; }; /** diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c index 1c32cb0f16f9..40a6df77bb8a 100644 --- a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c @@ -91,10 +91,9 @@ ls_ucode_img_build(const struct firmware *bl, const struct firmware *code, */ static int ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, - const char *falcon_name, const u32 falcon_id) + const char *falcon_name) { - const struct firmware *bl, *code, *data; - struct lsf_ucode_desc *lsf_desc; + const struct firmware *bl, *code, *data, *sig; char f[64]; int ret; @@ -113,6 +112,17 @@ ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, if (ret) goto free_inst; + snprintf(f, sizeof(f), "gr/%s_sig", falcon_name); + ret = nvkm_firmware_get(subdev->device, f, &sig); + if (ret) + goto free_data; + img->sig = kmemdup(sig->data, sig->size, GFP_KERNEL); + if (!img->sig) { + ret = -ENOMEM; + goto free_sig; + } + img->sig_size = sig->size; + img->ucode_data = ls_ucode_img_build(bl, code, data, &img->ucode_desc); if (IS_ERR(img->ucode_data)) { @@ -121,23 +131,8 @@ ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, } img->ucode_size = img->ucode_desc.image_size; - snprintf(f, sizeof(f), "gr/%s_sig", falcon_name); - lsf_desc = nvkm_acr_load_firmware(subdev, f, sizeof(*lsf_desc)); - if (IS_ERR(lsf_desc)) { - ret = PTR_ERR(lsf_desc); - goto free_image; - } - /* not needed? the signature should already have the right value */ - lsf_desc->falcon_id = falcon_id; - memcpy(&img->lsb_header.signature, lsf_desc, sizeof(*lsf_desc)); - img->falcon_id = lsf_desc->falcon_id; - kfree(lsf_desc); - - /* success path - only free requested firmware files */ - goto free_data; - -free_image: - kfree(img->ucode_data); +free_sig: + nvkm_firmware_put(sig); free_data: nvkm_firmware_put(data); free_inst: @@ -152,12 +147,12 @@ int acr_ls_ucode_load_fecs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) { - return ls_ucode_img_load_gr(subdev, img, "fecs", NVKM_FALCON_FECS); + return ls_ucode_img_load_gr(subdev, img, "fecs"); } int acr_ls_ucode_load_gpccs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) { - return ls_ucode_img_load_gr(subdev, img, "gpccs", NVKM_FALCON_GPCCS); + return ls_ucode_img_load_gr(subdev, img, "gpccs"); } -- git-series 0.8.10
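With the loading functions abstracted behind acr_r352_func, a new ACR version only has to override what actually changed and can reuse the rest of the generic r352 LS machinery. A minimal sketch, where everything named rNNN is a placeholder rather than a real revision:

/* Hypothetical acr_rNNN.c: keep the generic LS blob construction and
 * only override the WPR/LSB header generation. */
#include "acr_r352.h"

static int
acr_rNNN_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs)
{
	/* version-specific header tweaks would go here ... */
	return acr_r352_ls_fill_headers(acr, imgs);
}

const struct acr_r352_func
acr_rNNN_func = {
	/* HS hooks (.fixup_hs_desc, .generate_hs_bl_desc, ...) omitted */
	.ls_ucode_img_load = acr_r352_ls_ucode_img_load,
	.ls_fill_headers = acr_rNNN_ls_fill_headers,
	.ls_write_wpr = acr_r352_ls_write_wpr,
	/* .ls_func[] entries per managed falcon, as in acr_r361_func */
};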
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 15/33] secboot: safer zeroing of BL descriptors
Perform the zeroing of BL descriptors in the caller function instead of trusting each generator will do it. This could avoid a few pulled hairs. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 4 ++-- drm/nouveau/nvkm/subdev/secboot/acr_r361.c | 2 -- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index a552b55eadb8..e8dd21983675 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -104,7 +104,6 @@ acr_r352_generate_flcn_bl_desc(const struct nvkm_acr *acr, addr_code = (base + pdesc->app_resident_code_offset) >> 8; addr_data = (base + pdesc->app_resident_data_offset) >> 8; - memset(desc, 0, sizeof(*desc)); desc->ctx_dma = FALCON_DMAIDX_UCODE; desc->code_dma_base = lower_32_bits(addr_code); desc->non_sec_code_off = pdesc->app_resident_code_offset; @@ -353,6 +352,7 @@ acr_r352_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs, &img->lsb_header, sizeof(img->lsb_header)); /* Generate and write BL descriptor */ + memset(gdesc, 0, ls_func->bl_desc_size); ls_func->generate_bl_desc(&acr->base, _img, wpr_addr, gdesc); nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off, @@ -514,7 +514,6 @@ acr_r352_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc, struct acr_r352_flcn_bl_desc *bl_desc = _bl_desc; u64 addr_code, addr_data; - memset(bl_desc, 0, sizeof(*bl_desc)); addr_code = offset >> 8; addr_data = (offset + hdr->data_dma_base) >> 8; @@ -717,6 +716,7 @@ acr_r352_load(struct nvkm_acr *_acr, struct nvkm_secboot *sb, code_size, hsbl_desc->start_tag); /* Generate the BL header */ + memset(bl_desc, 0, bl_desc_size); acr->func->generate_hs_bl_desc(load_hdr, bl_desc, offset); /* diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c index d79ec7d38f9a..4fcdcff292f8 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c @@ -73,7 +73,6 @@ acr_r361_generate_flcn_bl_desc(const struct nvkm_acr *acr, addr_code = base + pdesc->app_resident_code_offset; addr_data = base + pdesc->app_resident_data_offset; - memset(desc, 0, sizeof(*desc)); desc->ctx_dma = FALCON_DMAIDX_UCODE; desc->code_dma_base = u64_to_flcn64(addr_code); desc->non_sec_code_off = pdesc->app_resident_code_offset; @@ -89,7 +88,6 @@ acr_r361_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc, { struct acr_r361_flcn_bl_desc *bl_desc = _bl_desc; - memset(bl_desc, 0, sizeof(*bl_desc)); bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; bl_desc->code_dma_base = u64_to_flcn64(offset); bl_desc->non_sec_code_off = hdr->non_sec_code_off; -- git-series 0.8.10
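In other words, the caller now owns the scratch descriptor and clears it once, so generators can rely on starting from zero. A sketch of a generator under the new convention (the rXXX names are placeholders, not a real revision):

static void
acr_rXXX_generate_flcn_bl_desc(const struct nvkm_acr *acr,
			       const struct ls_ucode_img *img, u64 wpr_addr,
			       void *_desc)
{
	struct acr_rXXX_flcn_bl_desc *desc = _desc;   /* hypothetical type */

	/* no memset(desc, 0, ...) here: the caller already zeroed it */
	desc->ctx_dma = FALCON_DMAIDX_UCODE;
	/* ... only the fields this version cares about ... */
}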
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 16/33] secboot: add missing fields to BL structure
Since DMEM was initialized to zero, these fields went unnoticed. Add them for safety. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 4 ++++ 1 file changed, 4 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index e8dd21983675..cc999bd60007 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -85,6 +85,8 @@ struct acr_r352_flcn_bl_desc { u32 code_entry_point; u32 data_dma_base; u32 data_size; + u32 code_dma_base1; + u32 data_dma_base1; }; /** @@ -106,10 +108,12 @@ acr_r352_generate_flcn_bl_desc(const struct nvkm_acr *acr, desc->ctx_dma = FALCON_DMAIDX_UCODE; desc->code_dma_base = lower_32_bits(addr_code); + desc->code_dma_base1 = upper_32_bits(addr_code); desc->non_sec_code_off = pdesc->app_resident_code_offset; desc->non_sec_code_size = pdesc->app_resident_code_size; desc->code_entry_point = pdesc->app_imem_entry; desc->data_dma_base = lower_32_bits(addr_data); + desc->data_dma_base1 = upper_32_bits(addr_data); desc->data_size = pdesc->app_resident_data_size; } -- git-series 0.8.10
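The new fields carry the upper halves of the shifted DMA addresses, so a code or data base that no longer fits in 32 bits after the >> 8 shift is not silently truncated. As a fragment, using the same helpers the patch uses:

	u64 addr_code = (base + pdesc->app_resident_code_offset) >> 8;

	desc->code_dma_base  = lower_32_bits(addr_code);
	desc->code_dma_base1 = upper_32_bits(addr_code);  /* bits 32 and up */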
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 17/33] secboot: set default error value in error register
Set a default error value in the mailbox 0 register so we can catch cases where the secure boot binary fails early without being able to report anything. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/gm200.c | 2 ++ 1 file changed, 2 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index 3239a2723e70..34064a4c177a 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -68,6 +68,8 @@ gm200_secboot_setup_falcon(struct gm200_secboot *gsb, struct nvkm_acr *acr) /* Set boot vector to code's starting virtual address */ nvkm_wr32(device, base + 0x104, acr->start_address); + /* Set default error value in mailbox register */ + nvkm_wr32(device, base + 0x040, 0xdeada5a5); /* Clear mailbox register used to reflect capabilities */ nvkm_wr32(device, base + 0x044, 0x0); -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 18/33] secboot: fix WPR descriptor generation
Generate the WPR descriptor closer to what RM does. In particular, set the expected masks, and only set the ucode members on Tegra. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 21 ++++++++++++--------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index cc999bd60007..fcde0634f209 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -495,19 +495,22 @@ acr_r352_fixup_hs_desc(struct acr_r352 *acr, struct nvkm_secboot *sb, { struct nvkm_gpuobj *ls_blob = acr->ls_blob; - desc->ucode_blob_base = ls_blob->addr; - desc->ucode_blob_size = ls_blob->size; - - desc->wpr_offset = 0; - /* WPR region information if WPR is not fixed */ if (sb->wpr_size == 0) { + u32 wpr_start = ls_blob->addr; + u32 wpr_end = wpr_start + ls_blob->size; + desc->wpr_region_id = 1; - desc->regions.no_regions = 1; + desc->regions.no_regions = 2; + desc->regions.region_props[0].start_addr = wpr_start >> 8; + desc->regions.region_props[0].end_addr = wpr_end >> 8; desc->regions.region_props[0].region_id = 1; - desc->regions.region_props[0].start_addr = ls_blob->addr >> 8; - desc->regions.region_props[0].end_addr - (ls_blob->addr + ls_blob->size) >> 8; + desc->regions.region_props[0].read_mask = 0xf; + desc->regions.region_props[0].write_mask = 0xc; + desc->regions.region_props[0].client_mask = 0x2; + } else { + desc->ucode_blob_base = ls_blob->addr; + desc->ucode_blob_size = ls_blob->size; } } -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 19/33] secboot: add lazy-bootstrap flag
When the PMU firmware is present, the falcons it manages need to have the lazy-bootstrap flag of their WPR header set so the ACR does not boot them. Add support for this. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 4 ++++ drm/nouveau/nvkm/subdev/secboot/acr_r352.h | 3 +++ 2 files changed, 7 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index fcde0634f209..ddecec4d2dbc 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -240,6 +240,10 @@ acr_r352_ls_img_fill_headers(struct acr_r352 *acr, whdr->bootstrap_owner = acr->base.boot_falcon; whdr->status = LSF_IMAGE_STATUS_COPY; + /* Skip bootstrapping falcons started by someone else than ACR */ + if (acr->lazy_bootstrap & BIT(_img->falcon_id)) + whdr->lazy_bootstrap = 1; + /* Align, save off, and include an LSB header size */ offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN); whdr->lsb_offset = offset; diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h index 18dd3d95cc56..b92125abfc7b 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h @@ -223,6 +223,9 @@ struct acr_r352 { /* Firmware already loaded? */ bool firmware_ok; + /* Falcons to lazy-bootstrap */ + u32 lazy_bootstrap; + /* To keep track of the state of all managed falcons */ enum { /* In non-secure state, no firmware loaded, no privileges*/ -- git-series 0.8.10
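The field is a bitmask indexed by falcon ID; code that knows the LS PMU firmware will take over bootstrapping sets the corresponding bits before the WPR image is generated. A hedged sketch of the intended usage (the actual call site arrives with the PMU support later in this series):

	/* acr is the struct acr_r352 being set up; defer FECS/GPCCS
	 * bootstrapping to the LS PMU firmware's ACR unit */
	acr->lazy_bootstrap |= BIT(NVKM_FALCON_FECS);
	acr->lazy_bootstrap |= BIT(NVKM_FALCON_GPCCS);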
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 20/33] secboot: store falcon's DMEM size in secboot structure
Store the falcon's DMEM size in the secboot structure so it can be retrieved later. This is needed to load the PMU LS firmware's argument at the end of DMEM, where the LS firmware expects it to be. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr.h | 2 ++ drm/nouveau/nvkm/subdev/secboot/base.c | 3 +++ 2 files changed, 5 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr.h b/drm/nouveau/nvkm/subdev/secboot/acr.h index 7ce11379f6f7..175f14fbda61 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr.h @@ -51,6 +51,7 @@ struct nvkm_acr_func { * @boot_falcon: ID of the falcon that will perform secure boot * @managed_falcons: bitfield of falcons managed by this ACR * @start_address: virtual start address of the HS bootloader + * @dmem_size: size of DMEM of the managing falcon */ struct nvkm_acr { const struct nvkm_acr_func *func; @@ -59,6 +60,7 @@ struct nvkm_acr { enum nvkm_falconidx boot_falcon; unsigned long managed_falcons; u32 start_address; + u32 dmem_size; }; void *nvkm_acr_load_firmware(const struct nvkm_subdev *, const char *, size_t); diff --git a/drm/nouveau/nvkm/subdev/secboot/base.c b/drm/nouveau/nvkm/subdev/secboot/base.c index b393ae8b8b12..3e48eb93197d 100644 --- a/drm/nouveau/nvkm/subdev/secboot/base.c +++ b/drm/nouveau/nvkm/subdev/secboot/base.c @@ -330,6 +330,9 @@ nvkm_secboot_ctor(const struct nvkm_secboot_func *func, struct nvkm_acr *acr, sb->debug_mode = (val >> 20) & 0x1; val = nvkm_rd32(device, sb->base + 0x108); + sb->acr->dmem_size = ((val >> 9) & 0x1ff) << 8; + + val = nvkm_rd32(device, sb->base + 0x108); nvkm_debug(&sb->subdev, "using %s falcon in %s mode\n", nvkm_falcon_name[acr->boot_falcon], -- git-series 0.8.10
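The decoded value is the falcon's DMEM size in bytes (bits 9..17 of register 0x108, counted in 256-byte units). Knowing it lets later code place the LS PMU firmware's command-line structure at the very top of DMEM, where that firmware expects it. A rough sketch using the helpers introduced later in this series; the buffer size and error handling are simplified assumptions:

	u32 cmdline_size = nvkm_pmu_cmdline_size(pmu);
	u8 buf[64] = {};                      /* assumes the cmdline fits */

	if (cmdline_size > sizeof(buf))
		return -ENOMEM;

	nvkm_pmu_write_cmdline(pmu, buf);
	/* copy it into the last cmdline_size bytes of the PMU's DMEM */
	nvkm_falcon_load_dmem(device, sb->base, buf,
			      sb->acr->dmem_size - cmdline_size, cmdline_size);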
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 21/33] secboot: clear halt interrupt after ACR is run
The halt interrupt must be cleared after ACR is run, otherwise the LS PMU firmware will not be able to run. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 3 ++- drm/nouveau/nvkm/subdev/secboot/base.c | 31 +++++++++++------------ drm/nouveau/nvkm/subdev/secboot/priv.h | 1 +- 3 files changed, 19 insertions(+), 16 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index ddecec4d2dbc..534a2a5ec25b 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -766,6 +766,8 @@ acr_r352_shutdown(struct acr_r352 *acr, struct nvkm_secboot *sb) static int acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) { + struct nvkm_subdev *subdev = &sb->subdev; + struct nvkm_device *device = subdev->device; int ret; if (sb->wpr_set) @@ -778,6 +780,7 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) nvkm_debug(&sb->subdev, "running HS load blob\n"); ret = sb->func->run_blob(sb, acr->load_blob); + nvkm_secboot_falcon_clear_halt_interrupt(device, sb->base); if (ret) return ret; nvkm_debug(&sb->subdev, "HS load blob completed\n"); diff --git a/drm/nouveau/nvkm/subdev/secboot/base.c b/drm/nouveau/nvkm/subdev/secboot/base.c index 3e48eb93197d..56c752c9ef42 100644 --- a/drm/nouveau/nvkm/subdev/secboot/base.c +++ b/drm/nouveau/nvkm/subdev/secboot/base.c @@ -93,21 +93,6 @@ */ static int -falcon_clear_halt_interrupt(struct nvkm_device *device, u32 base) -{ - int ret; - - /* clear halt interrupt */ - nvkm_mask(device, base + 0x004, 0x10, 0x10); - /* wait until halt interrupt is cleared */ - ret = nvkm_wait_msec(device, 10, base + 0x008, 0x10, 0x0); - if (ret < 0) - return ret; - - return 0; -} - -static int falcon_wait_idle(struct nvkm_device *device, u32 base) { int ret; @@ -203,13 +188,27 @@ nvkm_secboot_falcon_run(struct nvkm_secboot *sb) ret = nvkm_rd32(device, sb->base + 0x040); if (ret) { nvkm_error(&sb->subdev, "ACR boot failed, ret 0x%08x", ret); - falcon_clear_halt_interrupt(device, sb->base); return -EINVAL; } return 0; } +int +nvkm_secboot_falcon_clear_halt_interrupt(struct nvkm_device *device, u32 base) +{ + int ret; + + /* clear halt interrupt */ + nvkm_mask(device, base + 0x004, 0x10, 0x10); + /* wait until halt interrupt is cleared */ + ret = nvkm_wait_msec(device, 10, base + 0x008, 0x10, 0x0); + if (ret < 0) + return ret; + + return 0; +} + /** * nvkm_secboot_reset() - reset specified falcon */ diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index 75a3b995fdbb..bd397896bd54 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -37,6 +37,7 @@ int nvkm_secboot_ctor(const struct nvkm_secboot_func *, struct nvkm_acr *, struct nvkm_device *, int, struct nvkm_secboot *); int nvkm_secboot_falcon_reset(struct nvkm_secboot *); int nvkm_secboot_falcon_run(struct nvkm_secboot *); +int nvkm_secboot_falcon_clear_halt_interrupt(struct nvkm_device *, u32); struct flcn_u64 { -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 22/33] core: add falcon DMEM read function
Add nvkm_falcon_read_dmem() to read part of a falcon's DMEM into a CPU-accessible buffer. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/core/falcon.h | 1 + drm/nouveau/nvkm/core/falcon.c | 10 ++++++++++ 2 files changed, 11 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/include/nvkm/core/falcon.h b/drm/nouveau/include/nvkm/core/falcon.h index 530119847163..954296f0085b 100644 --- a/drm/nouveau/include/nvkm/core/falcon.h +++ b/drm/nouveau/include/nvkm/core/falcon.h @@ -46,5 +46,6 @@ extern const char *nvkm_falcon_name[]; void nvkm_falcon_load_imem(struct nvkm_device *, u32, void *, u32, u32, u32); void nvkm_falcon_load_dmem(struct nvkm_device *, u32, void *, u32, u32); +void nvkm_falcon_read_dmem(struct nvkm_device *, u32, u32, u32, void *); #endif diff --git a/drm/nouveau/nvkm/core/falcon.c b/drm/nouveau/nvkm/core/falcon.c index 806de4088a29..cc6c2808b53b 100644 --- a/drm/nouveau/nvkm/core/falcon.c +++ b/drm/nouveau/nvkm/core/falcon.c @@ -60,3 +60,13 @@ nvkm_falcon_load_dmem(struct nvkm_device *device, u32 base, void *data, nvkm_wr32(device, base + 0x1c4, ((u32 *)data)[i]); } +void +nvkm_falcon_read_dmem(struct nvkm_device *device, u32 base, u32 start, u32 size, + void *data) +{ + int i; + + nvkm_wr32(device, base + 0x1c0, start | (0x1 << 25)); + for (i = 0; i < size / 4; i++) + ((u32 *)data)[i] = nvkm_rd32(device, base + 0x1c4); +} -- git-series 0.8.10
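A short usage sketch: the function programs the DMEM port with auto-increment and copies back whole 32-bit words, so callers pass a 4-byte-multiple size and a buffer large enough to hold it. The offset and size below are arbitrary examples:

	u32 buf[4];

	/* read 16 bytes starting at DMEM offset 0x800 of the PMU falcon
	 * (register base 0x10a000) into buf */
	nvkm_falcon_read_dmem(device, 0x10a000, 0x800, sizeof(buf), buf);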
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 23/33] pmu: add nvkm_pmu_ctor function
Add a constructor function that can be called by our gm200 implementation. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/pmu/base.c | 14 ++++++++++---- drm/nouveau/nvkm/subdev/pmu/priv.h | 2 ++ 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/pmu/base.c b/drm/nouveau/nvkm/subdev/pmu/base.c index e611ce80f8ef..c7f432ca79ca 100644 --- a/drm/nouveau/nvkm/subdev/pmu/base.c +++ b/drm/nouveau/nvkm/subdev/pmu/base.c @@ -128,6 +128,15 @@ nvkm_pmu = { .intr = nvkm_pmu_intr, }; +void nvkm_pmu_ctor(const struct nvkm_pmu_func *func, struct nvkm_device *device, + int index, struct nvkm_pmu *pmu) +{ + nvkm_subdev_ctor(&nvkm_pmu, device, index, &pmu->subdev); + pmu->func = func; + INIT_WORK(&pmu->recv.work, nvkm_pmu_recv); + init_waitqueue_head(&pmu->recv.wait); +} + int nvkm_pmu_new_(const struct nvkm_pmu_func *func, struct nvkm_device *device, int index, struct nvkm_pmu **ppmu) @@ -135,9 +144,6 @@ nvkm_pmu_new_(const struct nvkm_pmu_func *func, struct nvkm_device *device, struct nvkm_pmu *pmu; if (!(pmu = *ppmu = kzalloc(sizeof(*pmu), GFP_KERNEL))) return -ENOMEM; - nvkm_subdev_ctor(&nvkm_pmu, device, index, &pmu->subdev); - pmu->func = func; - INIT_WORK(&pmu->recv.work, nvkm_pmu_recv); - init_waitqueue_head(&pmu->recv.wait); + nvkm_pmu_ctor(func, device, index, pmu); return 0; } diff --git a/drm/nouveau/nvkm/subdev/pmu/priv.h b/drm/nouveau/nvkm/subdev/pmu/priv.h index 2e2179a4ad17..12b81ae1b114 100644 --- a/drm/nouveau/nvkm/subdev/pmu/priv.h +++ b/drm/nouveau/nvkm/subdev/pmu/priv.h @@ -4,6 +4,8 @@ #include <subdev/pmu.h> #include <subdev/pmu/fuc/os.h> +void nvkm_pmu_ctor(const struct nvkm_pmu_func *, struct nvkm_device *, int, + struct nvkm_pmu *); int nvkm_pmu_new_(const struct nvkm_pmu_func *, struct nvkm_device *, int index, struct nvkm_pmu **); -- git-series 0.8.10
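The point of splitting construction out of nvkm_pmu_new_() is that an implementation embedding struct nvkm_pmu in a larger object can allocate that object itself and still share the common setup. A sketch of the pattern (the gm200 implementation added later in this series does exactly this; the names below are hypothetical):

struct my_pmu {                          /* hypothetical wrapper */
	struct nvkm_pmu base;
	/* implementation-private state ... */
};

static int
my_pmu_new(const struct nvkm_pmu_func *func, struct nvkm_device *device,
	   int index, struct nvkm_pmu **ppmu)
{
	struct my_pmu *pmu = kzalloc(sizeof(*pmu), GFP_KERNEL);

	if (!pmu)
		return -ENOMEM;
	*ppmu = &pmu->base;
	nvkm_pmu_ctor(func, device, index, &pmu->base);
	return 0;
}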
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 24/33] pmu: make sure the reset hook exists before running it
Some PMU implementations (in particular the ones managed by secure boot) may not have a reset() hook. Make sure we don't crash in that case. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/pmu/base.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drm/nouveau/nvkm/subdev/pmu/base.c b/drm/nouveau/nvkm/subdev/pmu/base.c index c7f432ca79ca..5548258a4510 100644 --- a/drm/nouveau/nvkm/subdev/pmu/base.c +++ b/drm/nouveau/nvkm/subdev/pmu/base.c @@ -85,7 +85,8 @@ nvkm_pmu_reset(struct nvkm_pmu *pmu) ); /* Reset. */ - pmu->func->reset(pmu); + if (pmu->func->reset) + pmu->func->reset(pmu); /* Wait for IMEM/DMEM scrubbing to be complete. */ nvkm_msec(device, 2000, -- git-series 0.8.10
From: Deepak Goyal <dgoyal at nvidia.com> Add support for NVIDIA-signed PMU firmware for the GM20X family of chips. This includes the way commands and message queues are handled, as well as core interfaces for secure boot to signal the PMU firmware version used and to generate the proper command line for it, and a new interface function to boot a given falcon using the PMU's ACR unit. Signed-off-by: Deepak Goyal <dgoyal at nvidia.com> [acourbot at nvidia.com: reorganize code] Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/subdev/pmu.h | 11 +- drm/nouveau/nvkm/subdev/pmu/Kbuild | 1 +- drm/nouveau/nvkm/subdev/pmu/base.c | 47 ++- drm/nouveau/nvkm/subdev/pmu/gm200.c | 713 +++++++++++++++++++++++++++- drm/nouveau/nvkm/subdev/pmu/gm200.h | 104 ++++- drm/nouveau/nvkm/subdev/pmu/nv_pmu.h | 50 ++- drm/nouveau/nvkm/subdev/pmu/priv.h | 18 +- 7 files changed, 944 insertions(+), 0 deletions(-) create mode 100644 drm/nouveau/nvkm/subdev/pmu/gm200.c create mode 100644 drm/nouveau/nvkm/subdev/pmu/gm200.h create mode 100644 drm/nouveau/nvkm/subdev/pmu/nv_pmu.h diff --git a/drm/nouveau/include/nvkm/subdev/pmu.h b/drm/nouveau/include/nvkm/subdev/pmu.h index f37538eb1fe5..151003e0500e 100644 --- a/drm/nouveau/include/nvkm/subdev/pmu.h +++ b/drm/nouveau/include/nvkm/subdev/pmu.h @@ -1,9 +1,11 @@ #ifndef __NVKM_PMU_H__ #define __NVKM_PMU_H__ #include <core/subdev.h> +#include <core/falcon.h> struct nvkm_pmu { const struct nvkm_pmu_func *func; + const struct nv_pmu_func *nv_func; struct nvkm_subdev subdev; struct { @@ -27,6 +29,14 @@ int nvkm_pmu_send(struct nvkm_pmu *, u32 reply[2], u32 process, u32 message, u32 data0, u32 data1); void nvkm_pmu_pgob(struct nvkm_pmu *, bool enable); +/* useful if we run a NVIDIA-signed firmware */ +int nvkm_pmu_set_version(struct nvkm_pmu *, u32); +u32 nvkm_pmu_cmdline_size(struct nvkm_pmu *); +void nvkm_pmu_write_cmdline(struct nvkm_pmu *, void *); + +/* interface to ACR unit running on PMU (NVIDIA signed firmware) */ +int nvkm_pmu_acr_boot_falcon(struct nvkm_pmu *, enum nvkm_falconidx); + int gt215_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gf100_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gf119_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); @@ -35,6 +45,7 @@ int gk110_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gk208_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gk20a_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gm107_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); +int gm200_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gp100_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); int gp102_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **); diff --git a/drm/nouveau/nvkm/subdev/pmu/Kbuild b/drm/nouveau/nvkm/subdev/pmu/Kbuild index 51fb4bf94a44..141f3ee6ffe4 100644 --- a/drm/nouveau/nvkm/subdev/pmu/Kbuild +++ b/drm/nouveau/nvkm/subdev/pmu/Kbuild @@ -8,5 +8,6 @@ nvkm-y += nvkm/subdev/pmu/gk110.o nvkm-y += nvkm/subdev/pmu/gk208.o nvkm-y += nvkm/subdev/pmu/gk20a.o nvkm-y += nvkm/subdev/pmu/gm107.o +nvkm-y += nvkm/subdev/pmu/gm200.o nvkm-y += nvkm/subdev/pmu/gp100.o nvkm-y += nvkm/subdev/pmu/gp102.o diff --git a/drm/nouveau/nvkm/subdev/pmu/base.c b/drm/nouveau/nvkm/subdev/pmu/base.c index 5548258a4510..20bd5585df15 100644 --- a/drm/nouveau/nvkm/subdev/pmu/base.c +++ b/drm/nouveau/nvkm/subdev/pmu/base.c @@ -120,6 +120,53 @@ nvkm_pmu_dtor(struct nvkm_subdev *subdev) return nvkm_pmu(subdev); } +u32 nvkm_pmu_cmdline_size(struct 
nvkm_pmu *pmu) +{ + if (!pmu || !pmu->nv_func || !pmu->nv_func->init) + return 0; + + return pmu->nv_func->init->cmdline_size; +} + +void +nvkm_pmu_write_cmdline(struct nvkm_pmu *pmu, void *buf) +{ + if (!pmu || !pmu->nv_func || !pmu->nv_func->init) + return; + + pmu->nv_func->init->gen_cmdline(pmu, buf); +} + +int +nvkm_pmu_acr_boot_falcon(struct nvkm_pmu *pmu, enum nvkm_falconidx falcon) +{ + if (!pmu || !pmu->nv_func || !pmu->nv_func->acr || + !pmu->nv_func->acr->boot_falcon) + return -ENODEV; + + return pmu->nv_func->acr->boot_falcon(pmu, falcon); +} + +int +nvkm_pmu_set_version(struct nvkm_pmu *pmu, u32 version) +{ + struct nvkm_subdev *subdev = &pmu->subdev; + + if (!pmu) + return -ENODEV; + + switch (version) { + default: + nvkm_error(subdev, "unhandled firmware version 0x%08x\n", + version); + return -EINVAL; + }; + + nvkm_debug(subdev, "firmware version: 0x%08x\n", version); + + return 0; +} + static const struct nvkm_subdev_func nvkm_pmu = { .dtor = nvkm_pmu_dtor, diff --git a/drm/nouveau/nvkm/subdev/pmu/gm200.c b/drm/nouveau/nvkm/subdev/pmu/gm200.c new file mode 100644 index 000000000000..11bf24f35d21 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/pmu/gm200.c @@ -0,0 +1,713 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + */ +#include "gm200.h" +#include <core/falcon.h> + +/* Max size of the messages we can receive */ +#define PMU_MSG_BUF_SIZE 128 + +#define PMU_UNIT_ID_IS_VALID(id) \ + (((id) < PMU_UNIT_END) || ((id) >= PMU_UNIT_TEST_START)) + +#define PMU_CMD_FLAGS_STATUS BIT(0) +#define PMU_CMD_FLAGS_INTR BIT(1) + + +#define PMU_IS_COMMAND_QUEUE(id) \ + ((id) < PMU_MESSAGE_QUEUE) + +#define PMU_IS_SW_COMMAND_QUEUE(id) \ + (((id) == PMU_COMMAND_QUEUE_HPQ) || ((id) == PMU_COMMAND_QUEUE_LPQ)) + +#define PMU_IS_MESSAGE_QUEUE(id) \ + ((id) == PMU_MESSAGE_QUEUE) + +#define QUEUE_ALIGNMENT 4 + +#define PMU_CMD_HDR_SIZE sizeof(struct pmu_hdr) +#define PMU_MSG_HDR_SIZE sizeof(struct pmu_hdr) + + + +static void +pmu_copy_to_dmem(struct gm200_pmu *priv, u32 dst, void *src, u32 size) +{ + struct nvkm_device *device = priv->base.subdev.device; + + mutex_lock(&priv->copy_lock); + + nvkm_falcon_load_dmem(device, 0x10a000, src, dst, size); + + mutex_unlock(&priv->copy_lock); +} + +static void +pmu_copy_from_dmem(struct gm200_pmu *priv, u32 src, void *dst, u32 size) +{ + struct nvkm_device *device = priv->base.subdev.device; + + mutex_lock(&priv->copy_lock); + + nvkm_falcon_read_dmem(device, 0x10a000, src, size, dst); + + mutex_unlock(&priv->copy_lock); +} + +static int +pmu_seq_acquire(struct gm200_pmu *priv, struct gm200_pmu_sequence **pseq) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + struct gm200_pmu_sequence *seq; + u32 index; + + mutex_lock(&priv->seq_lock); + index = find_first_zero_bit(priv->seq_tbl, GM200_PMU_NUM_SEQUENCES); + + if (index >= GM200_PMU_NUM_SEQUENCES) { + nvkm_error(subdev, "no free sequence available\n"); + mutex_unlock(&priv->seq_lock); + return -EAGAIN; + } + + set_bit(index, priv->seq_tbl); + mutex_unlock(&priv->seq_lock); + seq = &priv->seq[index]; + seq->state = SEQ_STATE_PENDING; + *pseq = seq; + + return 0; +} + +static void +pmu_seq_release(struct gm200_pmu *pmu, struct gm200_pmu_sequence *seq) +{ + seq->state = SEQ_STATE_FREE; + seq->callback = NULL; + seq->msg = NULL; + seq->completion = NULL; + clear_bit(seq->id, pmu->seq_tbl); +} + +static int +pmu_queue_head_get(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 *head) +{ + struct nvkm_device *device = priv->base.subdev.device; + + if (PMU_IS_COMMAND_QUEUE(queue->id)) + *head = nvkm_rd32(device, 0x0010a4a0 + (queue->index * 4)); + else + *head = nvkm_rd32(device, 0x0010a4c8); + + return 0; +} + +static int +pmu_queue_head_set(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 head) +{ + struct nvkm_device *device = priv->base.subdev.device; + + if (PMU_IS_COMMAND_QUEUE(queue->id)) + nvkm_wr32(device, 0x0010a4a0 + (queue->index * 4), head); + else + nvkm_wr32(device, 0x0010a4c8, head); + + return 0; +} + +static int +pmu_queue_tail_get(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 *tail) +{ + struct nvkm_device *device = priv->base.subdev.device; + + if (PMU_IS_COMMAND_QUEUE(queue->id)) + *tail = nvkm_rd32(device, 0x0010a4b0 + (queue->index * 4)); + else + *tail = nvkm_rd32(device, 0x0010a4cc); + + return 0; +} + +static int +pmu_queue_tail_set(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 tail) +{ + struct nvkm_device *device = priv->base.subdev.device; + + if (PMU_IS_COMMAND_QUEUE(queue->id)) + nvkm_wr32(device, 0x0010a4b0 + (queue->index * 4), tail); + else + nvkm_wr32(device, 0x0010a4cc, tail); + + return 0; +} + +static int +pmu_queue_lock(struct gm200_pmu_queue *queue) +{ + if (PMU_IS_MESSAGE_QUEUE(queue->id)) + return 0; + + if 
(PMU_IS_SW_COMMAND_QUEUE(queue->id)) { + mutex_lock(&queue->mutex); + return 0; + } + + return -EINVAL; +} + +static int +pmu_queue_unlock(struct gm200_pmu_queue *queue) +{ + if (PMU_IS_MESSAGE_QUEUE(queue->id)) + return 0; + + if (PMU_IS_SW_COMMAND_QUEUE(queue->id)) { + mutex_unlock(&queue->mutex); + return 0; + } + + return -EINVAL; +} + +/* called by pmu_read_message, no lock */ +static bool +pmu_queue_is_empty(struct gm200_pmu *priv, struct gm200_pmu_queue *queue) +{ + u32 head, tail; + + pmu_queue_head_get(priv, queue, &head); + + if (queue->oflag == OFLAG_READ) + tail = queue->position; + else + pmu_queue_tail_get(priv, queue, &tail); + + return head == tail; +} + +static bool +pmu_queue_has_room(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 size, bool *need_rewind) +{ + u32 head, tail, free; + bool rewind = false; + + size = ALIGN(size, QUEUE_ALIGNMENT); + + pmu_queue_head_get(priv, queue, &head); + pmu_queue_tail_get(priv, queue, &tail); + + if (head >= tail) { + free = queue->offset + queue->size - head; + free -= PMU_CMD_HDR_SIZE; + + if (size > free) { + rewind = true; + head = queue->offset; + } + } + + if (head < tail) + free = tail - head - 1; + + if (need_rewind) + *need_rewind = rewind; + + return size <= free; +} + +static int +pmu_queue_push(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + void *data, u32 size) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + + if (queue->oflag != OFLAG_WRITE) { + nvkm_error(subdev, "queue not opened for write\n"); + return -EINVAL; + } + + pmu_copy_to_dmem(priv, queue->position, data, size); + queue->position += ALIGN(size, QUEUE_ALIGNMENT); + + return 0; +} + +static int +pmu_queue_pop(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + void *data, u32 size, u32 *bytes_read) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + u32 head, tail, used; + + *bytes_read = 0; + + if (queue->oflag != OFLAG_READ) { + nvkm_error(subdev, "queue not opened for read\n"); + return -EINVAL; + } + + pmu_queue_head_get(priv, queue, &head); + if (head < queue->position) + queue->position = queue->offset; + tail = queue->position; + + if (head == tail) { + *bytes_read = 0; + return 0; + } + used = head - tail; + + if (size > used) { + nvkm_warn(subdev, "queue size smaller than read request\n"); + size = used; + } + + pmu_copy_from_dmem(priv, tail, data, size); + queue->position += ALIGN(size, QUEUE_ALIGNMENT); + *bytes_read = size; + + return 0; +} + +static void +pmu_queue_rewind(struct gm200_pmu *priv, struct gm200_pmu_queue *queue) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + struct pmu_hdr cmd; + int err; + + if (queue->oflag == OFLAG_CLOSED) { + nvkm_error(subdev, "queue not opened\n"); + return; + } + + if (queue->oflag == OFLAG_WRITE) { + cmd.unit_id = PMU_UNIT_REWIND; + cmd.size = sizeof(cmd); + err = pmu_queue_push(priv, queue, &cmd, cmd.size); + if (err) + nvkm_error(subdev, "pmu_queue_push failed\n"); + + nvkm_debug(subdev, "queue %d rewinded\n", queue->id); + } + + queue->position = queue->offset; +} + +/* Open for read and lock the queue */ +static int +pmu_queue_open_read(struct gm200_pmu *priv, struct gm200_pmu_queue *queue) +{ + int err; + + err = pmu_queue_lock(queue); + if (err) + return err; + + if (WARN_ON(queue->oflag != OFLAG_CLOSED)) { + pmu_queue_unlock(queue); + return -EBUSY; + } + + pmu_queue_tail_get(priv, queue, &queue->position); + queue->oflag = OFLAG_READ; + + return 0; +} + +/** + * open for write and lock the queue + * make sure there's enough free space for the write + */ 
+static int +pmu_queue_open_write(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + u32 size) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + bool rewind = false; + int err; + + err = pmu_queue_lock(queue); + if (err) + return err; + + if (WARN_ON(queue->oflag != OFLAG_CLOSED)) { + pmu_queue_unlock(queue); + return -EBUSY; + } + + if (!pmu_queue_has_room(priv, queue, size, &rewind)) { + nvkm_error(subdev, "queue full\n"); + pmu_queue_unlock(queue); + return -EAGAIN; + } + + pmu_queue_head_get(priv, queue, &queue->position); + queue->oflag = OFLAG_WRITE; + + if (rewind) + pmu_queue_rewind(priv, queue); + + return 0; +} + +static int +pmu_queue_close(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + bool commit) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + + if (WARN_ON(queue->oflag == OFLAG_CLOSED)) { + nvkm_warn(subdev, "queue alpmu_ready closed\n"); + return 0; + } + + if (commit) { + if (queue->oflag == OFLAG_READ) + pmu_queue_tail_set(priv, queue, queue->position); + else + pmu_queue_head_set(priv, queue, queue->position); + } + + queue->oflag = OFLAG_CLOSED; + pmu_queue_unlock(queue); + + return 0; +} + +static bool +pmu_check_cmd_params(struct gm200_pmu *priv, struct pmu_hdr *cmd, + struct pmu_hdr *msg, u32 queue_id) +{ + struct gm200_pmu_queue *queue; + + if (!PMU_IS_SW_COMMAND_QUEUE(queue_id)) + return false; + + queue = &priv->queue[queue_id]; + if (cmd->size < PMU_CMD_HDR_SIZE) + return false; + + if (cmd->size > (queue->size / 2)) + return false; + + if (msg != NULL && msg->size < PMU_MSG_HDR_SIZE) + return false; + + if (!PMU_UNIT_ID_IS_VALID(cmd->unit_id)) + return false; + + return true; +} + +static int +pmu_cmd_write(struct gm200_pmu *priv, struct pmu_hdr *cmd, u32 queue_id) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + static unsigned long timeout = ~0; + unsigned long end_jiffies = jiffies + msecs_to_jiffies(timeout); + int err = -EAGAIN; + int ret = 0; + bool commit = true; + struct gm200_pmu_queue *queue; + + queue = &priv->queue[queue_id]; + + while (err == -EAGAIN && time_before(jiffies, end_jiffies)) + err = pmu_queue_open_write(priv, queue, cmd->size); + if (err) { + nvkm_error(subdev, "pmu_queue_open_write failed\n"); + return err; + } + + err = pmu_queue_push(priv, queue, cmd, cmd->size); + if (err) { + nvkm_error(subdev, "pmu_queue_push failed\n"); + ret = err; + commit = false; + } + + err = pmu_queue_close(priv, queue, commit); + if (err) { + nvkm_error(subdev, "fail to close queue-id %d\n", queue_id); + ret = err; + } + + return ret; +} + +int +nv_pmu_cmd_post(struct nvkm_pmu *pmu, struct pmu_hdr *cmd, struct pmu_hdr *msg, + enum nv_pmu_queue queue_id, nv_pmu_callback callback, + struct completion *completion) +{ + struct gm200_pmu *priv = gm200_pmu(pmu); + struct nvkm_subdev *subdev = &pmu->subdev; + struct gm200_pmu_sequence *seq; + int err; + + if (WARN_ON(!priv->ready)) + return -EINVAL; + + if (!pmu_check_cmd_params(priv, cmd, msg, queue_id)) { + nvkm_error(subdev, "invalid pmu cmd :\n" + "queue_id=%d,\n" + "cmd_size=%d, cmd_unit_id=%d, msg=%p\n", + queue_id, cmd->size, cmd->unit_id, msg); + return -EINVAL; + } + + err = pmu_seq_acquire(priv, &seq); + if (err) + return err; + + cmd->seq_id = seq->id; + cmd->ctrl_flags = PMU_CMD_FLAGS_STATUS | PMU_CMD_FLAGS_INTR; + + seq->callback = callback; + seq->msg = msg; + seq->state = SEQ_STATE_USED; + seq->completion = completion; + + err = pmu_cmd_write(priv, cmd, queue_id); + if (err) { + seq->state = SEQ_STATE_PENDING; + pmu_seq_release(priv, seq); + } + + return err; +} 
+ +static bool +pmu_msg_read(struct gm200_pmu *priv, struct gm200_pmu_queue *queue, + struct pmu_hdr *hdr) +{ + struct nvkm_subdev *subdev = &priv->base.subdev; + bool commit = true; + int status = 0; + u32 read_size, bytes_read; + int err; + + + if (pmu_queue_is_empty(priv, queue)) + return false; + + err = pmu_queue_open_read(priv, queue); + if (err) { + nvkm_error(subdev, "fail to open queue %d\n", queue->id); + status |= err; + return false; + } + + err = pmu_queue_pop(priv, queue, hdr, PMU_MSG_HDR_SIZE, &bytes_read); + if (err || (bytes_read != PMU_MSG_HDR_SIZE)) { + nvkm_error(subdev, "fail to read from queue %d\n", queue->id); + status |= -EINVAL; + commit = false; + goto close; + } + + if (!PMU_UNIT_ID_IS_VALID(hdr->unit_id)) { + nvkm_error(subdev, "invalid unit_id %d\n", hdr->unit_id); + status |= -EINVAL; + commit = false; + goto close; + } + + if (hdr->size > PMU_MSG_BUF_SIZE) { + nvkm_error(subdev, "message too big (%d bytes)\n", hdr->size); + return -ENOSPC; + } + + if (hdr->size > PMU_MSG_HDR_SIZE) { + read_size = hdr->size - PMU_MSG_HDR_SIZE; + err = pmu_queue_pop(priv, queue, (hdr + 1), read_size, + &bytes_read); + if (err || (bytes_read != read_size)) { + nvkm_error(subdev, "fail to read from queue/n"); + status |= err; + commit = false; + goto close; + } + } + +close: + err = pmu_queue_close(priv, queue, commit); + if (err) { + nvkm_error(subdev, "fail to close queue %d", queue->id); + status |= err; + } + + if (status) + return false; + + return true; +} + +static int +pmu_msg_handle(struct gm200_pmu *priv, struct pmu_hdr *hdr) +{ + struct nvkm_pmu *pmu = &priv->base; + struct nvkm_subdev *subdev = &priv->base.subdev; + struct gm200_pmu_sequence *seq; + + seq = &priv->seq[hdr->seq_id]; + if (seq->state != SEQ_STATE_USED && seq->state != SEQ_STATE_CANCELLED) { + nvkm_error(subdev, "msg for an unknown sequence %d", seq->id); + return -EINVAL; + } + + if (seq->state == SEQ_STATE_USED) { + if (seq->callback) + seq->callback(pmu, hdr); + } + + if (seq->completion) + complete(seq->completion); + + pmu_seq_release(priv, seq); + + return 0; +} + +static int +gm200_pmu_handle_init_msg(struct nvkm_pmu *pmu, struct pmu_hdr *hdr) +{ + struct gm200_pmu *priv = gm200_pmu(pmu); + struct nvkm_subdev *subdev = &priv->base.subdev; + struct nvkm_device *device = subdev->device; + u32 tail; + int ret, i; + + /* + * Read the message - queues are not initialized yet so we cannot rely + * on pmu_msg_read + */ + tail = nvkm_rd32(device, 0x0010a4cc); + pmu_copy_from_dmem(priv, tail, hdr, PMU_MSG_HDR_SIZE); + + if (hdr->unit_id != PMU_UNIT_INIT) { + nvkm_error(subdev, "expected message from PMU\n"); + return -EINVAL; + } + + if (hdr->size > PMU_MSG_BUF_SIZE) { + nvkm_error(subdev, "message too big (%d bytes)\n", hdr->size); + return -ENOSPC; + } + + pmu_copy_from_dmem(priv, tail + PMU_MSG_HDR_SIZE, (hdr + 1), + hdr->size - PMU_MSG_HDR_SIZE); + + tail += ALIGN(hdr->size, QUEUE_ALIGNMENT); + nvkm_wr32(device, 0x0010a4cc, tail); + + ret = pmu->nv_func->init->init_callback(pmu, hdr); + if (ret) + return ret; + + for (i = 0; i < GM200_PMU_QUEUE_COUNT; i++) { + struct gm200_pmu_queue *queue = &priv->queue[i]; + + nvkm_debug(subdev, + "queue %d: index %d, offset 0x%08x, size 0x%08x\n", + i, queue->index, queue->offset, queue->size); + } + + priv->ready = true; + + /* Complete PMU initialization by initializing WPR region */ + pmu->nv_func->acr->init_wpr_region(pmu); + + return 0; +} + +static void +gm200_pmu_recv(struct nvkm_pmu *pmu) +{ + struct gm200_pmu *priv = gm200_pmu(pmu); + /* + * We are invoked 
from a worker thread, so normally we have plenty of + * stack space to work with. + */ + u8 msg_buffer[PMU_MSG_BUF_SIZE]; + struct pmu_hdr *hdr = (void *)msg_buffer; + + mutex_lock(&priv->isr_mutex); + + if ((!priv->ready)) + gm200_pmu_handle_init_msg(pmu, hdr); + else while (pmu_msg_read(priv, &priv->queue[PMU_MESSAGE_QUEUE], hdr)) + pmu_msg_handle(priv, hdr); + + mutex_unlock(&priv->isr_mutex); +} + +static int +gm200_pmu_init(struct nvkm_pmu *pmu) +{ + struct gm200_pmu *priv = (struct gm200_pmu *)pmu; + + priv->ready = false; + reinit_completion(&priv->init_done); + + return 0; +} + +static const struct nvkm_pmu_func +gm200_pmu = { + .init = gm200_pmu_init, + .intr = gt215_pmu_intr, + .recv = gm200_pmu_recv, +}; + +int +gm200_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu) +{ + struct gm200_pmu *priv; + int i; + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) { + priv = NULL; + return -ENOMEM; + } + *ppmu = &priv->base; + + nvkm_pmu_ctor(&gm200_pmu, device, index, &priv->base); + + mutex_init(&priv->isr_mutex); + mutex_init(&priv->seq_lock); + mutex_init(&priv->copy_lock); + + for (i = 0; i < GM200_PMU_NUM_SEQUENCES; i++) + priv->seq[i].id = i; + + init_completion(&priv->init_done); + + return 0; +} diff --git a/drm/nouveau/nvkm/subdev/pmu/gm200.h b/drm/nouveau/nvkm/subdev/pmu/gm200.h new file mode 100644 index 000000000000..7b5d2e88b379 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/pmu/gm200.h @@ -0,0 +1,104 @@ + +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + */ +#ifndef __NVKM_PMU_GM200_H_ +#define __NVKM_PMU_GM200_H_ + +#include "priv.h" +#include "nv_pmu.h" + +enum gm200_seq_state { + SEQ_STATE_FREE = 0, + SEQ_STATE_PENDING, + SEQ_STATE_USED, + SEQ_STATE_CANCELLED +}; + +struct gm200_pmu_sequence { + u16 id; + enum gm200_seq_state state; + struct pmu_hdr *msg; + nv_pmu_callback callback; + struct completion *completion; +}; + +/** + * Structure pmu_queue + * mutex_lock - used by sw, for LPQ/HPQ queue + * position - current write position + * offset - physical dmem offset where this queue begins + * id - logical queue identifier + * index - physical queue index + * size - in bytes + * oflag - flag to indentify open mode + */ +struct gm200_pmu_queue { + struct mutex mutex; + u32 id; + u32 index; + u32 offset; + u32 size; + u32 position; + enum { + OFLAG_CLOSED = 0, + OFLAG_READ, + OFLAG_WRITE, + } oflag; +}; + +#define GM200_PMU_QUEUE_COUNT 5 +#define GM200_PMU_NUM_SEQUENCES 256 + +struct gm200_pmu { + struct nvkm_pmu base; + bool ready; + struct completion init_done; + struct mutex isr_mutex; + struct mutex seq_lock; + struct mutex copy_lock; + struct gm200_pmu_queue queue[GM200_PMU_QUEUE_COUNT]; + struct gm200_pmu_sequence seq[GM200_PMU_NUM_SEQUENCES]; + unsigned long seq_tbl[BITS_TO_LONGS(GM200_PMU_NUM_SEQUENCES)]; +}; +#define gm200_pmu(ptr) container_of(ptr, struct gm200_pmu, base) + +/** + * Structure pmu_hdr - struct for cmd(that we send) or msg(that we receive). + * unit_id - Comp in PMU to/from which cmd sent or msg received. + * size - Total size of pmu cmd or pmu msg. + * ctrl_flags - Flag to indicate type of msg/cmd. + * seq_id - Sequence id to match a pmu msg to pmu cmd. + */ +struct pmu_hdr { + u8 unit_id; + u8 size; + u8 ctrl_flags; + u8 seq_id; +}; + +struct pmu_msg_base { + struct pmu_hdr hdr; + u8 msg_type; +}; + +#endif diff --git a/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h b/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h new file mode 100644 index 000000000000..4267952231f7 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h @@ -0,0 +1,50 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + */ +#ifndef __NVKM_PMU_NV_PMU_H_ +#define __NVKM_PMU_NV_PMU_H_ + +typedef void (*nv_pmu_callback)(struct nvkm_pmu *, struct pmu_hdr *); + +/* Units we can communicate with using the PMU interface */ +enum nv_pmu_unit { + PMU_UNIT_REWIND = 0x00, + PMU_UNIT_INIT = 0x07, + PMU_UNIT_ACR = 0x0a, + PMU_UNIT_END = 0x23, + PMU_UNIT_TEST_START = 0xfe, +}; + +/* Queues identifiers */ +enum nv_pmu_queue { + /* High Priority Command Queue for Host -> PMU communication */ + PMU_COMMAND_QUEUE_HPQ = 0, + /* Low Priority Command Queue for Host -> PMU communication */ + PMU_COMMAND_QUEUE_LPQ = 1, + /* Message queue for PMU -> Host communication */ + PMU_MESSAGE_QUEUE = 4, +}; + +int nv_pmu_cmd_post(struct nvkm_pmu *, struct pmu_hdr *, struct pmu_hdr *, + enum nv_pmu_queue, nv_pmu_callback, struct completion *); + +#endif diff --git a/drm/nouveau/nvkm/subdev/pmu/priv.h b/drm/nouveau/nvkm/subdev/pmu/priv.h index 12b81ae1b114..b93b300a101a 100644 --- a/drm/nouveau/nvkm/subdev/pmu/priv.h +++ b/drm/nouveau/nvkm/subdev/pmu/priv.h @@ -9,6 +9,8 @@ void nvkm_pmu_ctor(const struct nvkm_pmu_func *, struct nvkm_device *, int, int nvkm_pmu_new_(const struct nvkm_pmu_func *, struct nvkm_device *, int index, struct nvkm_pmu **); +struct pmu_hdr; + struct nvkm_pmu_func { struct { u32 *data; @@ -37,5 +39,21 @@ void gt215_pmu_intr(struct nvkm_pmu *); void gt215_pmu_recv(struct nvkm_pmu *); int gt215_pmu_send(struct nvkm_pmu *, u32[2], u32, u32, u32, u32); +struct nv_pmu_init_func { + u32 cmdline_size; + void (*gen_cmdline)(struct nvkm_pmu *, void *buf); + int (*init_callback)(struct nvkm_pmu *, struct pmu_hdr *); +}; + +struct nv_pmu_acr_func { + int (*init_wpr_region)(struct nvkm_pmu *pmu); + int (*boot_falcon)(struct nvkm_pmu *, enum nvkm_falconidx); +}; + +struct nv_pmu_func { + const struct nv_pmu_init_func *init; + const struct nv_pmu_acr_func *acr; +}; + void gk110_pmu_pgob(struct nvkm_pmu *, bool); #endif -- git-series 0.8.10
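Seen from the secure-boot side, the interfaces added here are meant to be used roughly as follows. This is a hedged sketch only: fw_version stands for the value read from the LS PMU firmware descriptor, cmdline_buf is an illustrative buffer, and error handling is trimmed:

	struct nvkm_pmu *pmu = device->pmu;
	u8 cmdline_buf[64] = {};
	u32 cmdline_size;
	int ret;

	/* tell the PMU layer which NVIDIA firmware was loaded so it can
	 * select the matching command/message formats */
	ret = nvkm_pmu_set_version(pmu, fw_version);
	if (ret)
		return ret;

	/* generate the DMEM command line the LS PMU firmware expects */
	cmdline_size = nvkm_pmu_cmdline_size(pmu);
	nvkm_pmu_write_cmdline(pmu, cmdline_buf);

	/* ... load and run the LS PMU firmware ... */

	/* then ask its ACR unit to bootstrap the other managed falcons */
	ret = nvkm_pmu_acr_boot_falcon(pmu, NVKM_FALCON_FECS);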
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 26/33] pmu: support for GM20B signed firmware
From: Deepak Goyal <dgoyal at nvidia.com> Add support for the message format used by the GM20B signed PMU firmware. Signed-off-by: Deepak Goyal <dgoyal at nvidia.com> [acourbot at nvidia.com: reorganize code] Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/pmu/Kbuild | 2 +- drm/nouveau/nvkm/subdev/pmu/base.c | 3 +- drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c | 255 +++++++++++++++++++++++- drm/nouveau/nvkm/subdev/pmu/nv_pmu.h | 3 +- drm/nouveau/nvkm/subdev/pmu/priv.h | 2 +- 5 files changed, 265 insertions(+), 0 deletions(-) create mode 100644 drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c diff --git a/drm/nouveau/nvkm/subdev/pmu/Kbuild b/drm/nouveau/nvkm/subdev/pmu/Kbuild index 141f3ee6ffe4..b20328cf897e 100644 --- a/drm/nouveau/nvkm/subdev/pmu/Kbuild +++ b/drm/nouveau/nvkm/subdev/pmu/Kbuild @@ -11,3 +11,5 @@ nvkm-y += nvkm/subdev/pmu/gm107.o nvkm-y += nvkm/subdev/pmu/gm200.o nvkm-y += nvkm/subdev/pmu/gp100.o nvkm-y += nvkm/subdev/pmu/gp102.o + +nvkm-y += nvkm/subdev/pmu/nv_0137c63d.o diff --git a/drm/nouveau/nvkm/subdev/pmu/base.c b/drm/nouveau/nvkm/subdev/pmu/base.c index 20bd5585df15..0ca9cafb18c0 100644 --- a/drm/nouveau/nvkm/subdev/pmu/base.c +++ b/drm/nouveau/nvkm/subdev/pmu/base.c @@ -156,6 +156,9 @@ nvkm_pmu_set_version(struct nvkm_pmu *pmu, u32 version) return -ENODEV; switch (version) { + case 0x0137c63d: + pmu->nv_func = &nv_0137c63d_func; + break; default: nvkm_error(subdev, "unhandled firmware version 0x%08x\n", version); diff --git a/drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c b/drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c new file mode 100644 index 000000000000..522f8624e1de --- /dev/null +++ b/drm/nouveau/nvkm/subdev/pmu/nv_0137c63d.c @@ -0,0 +1,255 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ +#include "gm200.h" +#include "nv_pmu.h" + +/** + * struct pmu_cmdline_args - PMU ucode commandline DMEM arguments + * @freq_hz: freq at which PMU falcon runs (Hz) + * @trace_buf: trace buf memory descriptor. Trace buffer can be used + * to dump traces from PMU ucode. + * @secure_mode: total size of the code part in the ucode + * @raise_priv_sec: raise priv level required for desired regs + * @gc6_ctx: dma info for GC6 context + * @init_data_dma_info: dma info for INIT data surface. 
+ * + * Structure used for Global command-line arguments for the PMU + */ +struct pmu_cmdline_args { + u32 reserved; + u32 freq_hz; + u32 trace_size; + u32 trace_dma_base; + u16 trace_dma_base1; + u8 trace_dma_offset; + u32 trace_dma_idx; + bool secure_mode; + bool raise_priv_sec; + struct { + u32 dma_base; + u16 dma_base1; + u8 dma_offset; + u16 fb_size; + u8 dma_idx; + } gc6_ctx; + u8 pad; +}; + +static void +pmu_gen_cmdline(struct nvkm_pmu *pmu, void *buf) +{ + struct pmu_cmdline_args *args = buf; + + args->secure_mode = 1; +} + +enum { + PMU_INIT_MSG_TYPE_PMU_INIT = 0x0, +}; + +struct pmu_init_msg { + struct pmu_msg_base base; + + u8 pad; + u16 os_debug_entry_point; + + struct { + u16 size; + u16 offset; + u8 index; + u8 pad; + } queue_info[GM200_PMU_QUEUE_COUNT]; + + u16 sw_managed_area_offset; + u16 sw_managed_area_size; + struct { + bool is_valid; + u8 version; + u32 status; + u8 hulk_data[4]; + u32 vpr_data[2]; + } brsdata; +}; + +static int +pmu_init_callback(struct nvkm_pmu *pmu, struct pmu_hdr *hdr) +{ + struct pmu_init_msg *init = (void *)hdr; + struct gm200_pmu *priv = gm200_pmu(pmu); + struct nvkm_subdev *subdev = &pmu->subdev; + int i; + + if (init->base.msg_type != PMU_INIT_MSG_TYPE_PMU_INIT) { + nvkm_error(subdev, "expected PMU init msg\n"); + return -EINVAL; + } + + for (i = 0; i < GM200_PMU_QUEUE_COUNT; i++) { + struct gm200_pmu_queue *queue = &priv->queue[i]; + + queue->id = i; + queue->index = init->queue_info[i].index; + queue->offset = init->queue_info[i].offset; + queue->size = init->queue_info[i].size; + mutex_init(&queue->mutex); + } + + return 0; +} + +const struct nv_pmu_init_func +nv_0137c63d_init_func = { + .cmdline_size = sizeof(struct pmu_cmdline_args), + .gen_cmdline = pmu_gen_cmdline, + .init_callback = pmu_init_callback, +}; + +/* ACR commands */ +enum { + PMU_ACR_CMD_ID_INIT_WPR_REGION = 0x0, + PMU_ACR_CMD_ID_BOOTSTRAP_FALCON, +}; + +struct acr_init_wpr_msg { + struct pmu_msg_base base; + + u32 error_code; +}; + +static void +acr_init_wpr_callback(struct nvkm_pmu *pmu, struct pmu_hdr *hdr) +{ + struct acr_init_wpr_msg *msg = (void *)hdr; + struct nvkm_subdev *subdev = &pmu->subdev; + struct gm200_pmu *priv = gm200_pmu(pmu); + + if (msg->error_code) { + nvkm_error(subdev, "ACR WPR init failure: %d\n", + msg->error_code); + return; + } + + nvkm_debug(subdev, "ACR WPR init complete\n"); + complete_all(&priv->init_done); +} + +static int +acr_init_wpr(struct nvkm_pmu *pmu) +{ + /* + * regionid - specifying region ID in WPR. + * wpr_offset - wpr offset in WPR region. 
+ */ + struct { + struct pmu_hdr hdr; + u8 cmd_type; + u32 region_id; + u32 wpr_offset; + } cmd; + memset(&cmd, 0, sizeof(cmd)); + + cmd.hdr.unit_id = PMU_UNIT_ACR; + cmd.hdr.size = sizeof(cmd); + cmd.cmd_type = PMU_ACR_CMD_ID_INIT_WPR_REGION; + cmd.region_id = 0x01; + cmd.wpr_offset = 0x00; + + nv_pmu_cmd_post(pmu, &cmd.hdr, NULL, PMU_COMMAND_QUEUE_HPQ, + acr_init_wpr_callback, NULL); + + return 0; +} + +struct acr_bootstrap_falcon_msg { + struct pmu_msg_base base; + + u32 falcon_id; +}; + +static void +acr_boot_falcon_callback(struct nvkm_pmu *pmu, struct pmu_hdr *hdr) +{ + struct acr_bootstrap_falcon_msg *msg = (void *)hdr; + struct nvkm_subdev *subdev = &pmu->subdev; + u32 falcon_id = msg->falcon_id; + + if (falcon_id >= NVKM_FALCON_END) { + nvkm_error(subdev, "in bootstrap falcon callback:\n"); + nvkm_error(subdev, "invalid falcon ID 0x%x\n", falcon_id); + return; + } + nvkm_debug(subdev, "%s booted\n", nvkm_falcon_name[falcon_id]); +} + +enum { + ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES = 0, + ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_NO = 1, +}; + +static int +acr_boot_falcon(struct nvkm_pmu *pmu, enum nvkm_falconidx falcon) +{ + struct gm200_pmu *priv = gm200_pmu(pmu); + DECLARE_COMPLETION_ONSTACK(completed); + /* + * flags - Flag specifying RESET or no RESET. + * falcon id - Falcon id specifying falcon to bootstrap. + */ + struct { + struct pmu_hdr hdr; + u8 cmd_type; + u32 flags; + u32 falcon_id; + } cmd; + + if (!wait_for_completion_timeout(&priv->init_done, + msecs_to_jiffies(1000))) + return -ETIMEDOUT; + + memset(&cmd, 0, sizeof(cmd)); + + cmd.hdr.unit_id = PMU_UNIT_ACR; + cmd.hdr.size = sizeof(cmd); + cmd.cmd_type = PMU_ACR_CMD_ID_BOOTSTRAP_FALCON; + cmd.flags = ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES; + cmd.falcon_id = falcon; + nv_pmu_cmd_post(pmu, &cmd.hdr, NULL, PMU_COMMAND_QUEUE_HPQ, + acr_boot_falcon_callback, &completed); + + if (!wait_for_completion_timeout(&completed, msecs_to_jiffies(1000))) + return -ETIMEDOUT; + + return 0; +} + +const struct nv_pmu_acr_func +nv_0137c63d_acr_func = { + .init_wpr_region = acr_init_wpr, + .boot_falcon = acr_boot_falcon, +}; + +const struct nv_pmu_func +nv_0137c63d_func = { + .init = &nv_0137c63d_init_func, + .acr = &nv_0137c63d_acr_func, +}; diff --git a/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h b/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h index 4267952231f7..fb8168f3e564 100644 --- a/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h +++ b/drm/nouveau/nvkm/subdev/pmu/nv_pmu.h @@ -44,6 +44,9 @@ enum nv_pmu_queue { PMU_MESSAGE_QUEUE = 4, }; +extern const struct nv_pmu_init_func nv_0137c63d_init_func; +extern const struct nv_pmu_acr_func nv_0137c63d_acr_func; + int nv_pmu_cmd_post(struct nvkm_pmu *, struct pmu_hdr *, struct pmu_hdr *, enum nv_pmu_queue, nv_pmu_callback, struct completion *); diff --git a/drm/nouveau/nvkm/subdev/pmu/priv.h b/drm/nouveau/nvkm/subdev/pmu/priv.h index b93b300a101a..4eb29c2ec785 100644 --- a/drm/nouveau/nvkm/subdev/pmu/priv.h +++ b/drm/nouveau/nvkm/subdev/pmu/priv.h @@ -55,5 +55,7 @@ struct nv_pmu_func { const struct nv_pmu_acr_func *acr; }; +extern const struct nv_pmu_func nv_0137c63d_func; + void gk110_pmu_pgob(struct nvkm_pmu *, bool); #endif -- git-series 0.8.10
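Supporting a further signed PMU firmware later means providing another nv_pmu_func and adding its version number to the switch in nvkm_pmu_set_version(). A sketch; the second case is purely hypothetical:

	switch (version) {
	case 0x0137c63d:
		pmu->nv_func = &nv_0137c63d_func;
		break;
	case 0x01234567:                         /* hypothetical version */
		pmu->nv_func = &nv_01234567_func;  /* its own nv_pmu_func */
		break;
	default:
		nvkm_error(subdev, "unhandled firmware version 0x%08x\n",
			   version);
		return -EINVAL;
	}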
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 27/33] secboot: add LS firmware post-run hooks
Add the ability for LS firmwares to declare a post-run hook that is invoked right after the HS firmware is executed. This allows them to e.g. write some initialization data into the falcon's DMEM. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 11 +++++++++++ drm/nouveau/nvkm/subdev/secboot/acr_r352.h | 2 ++ 2 files changed, 13 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 534a2a5ec25b..fd316bd2f0a4 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -768,6 +768,8 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) { struct nvkm_subdev *subdev = &sb->subdev; struct nvkm_device *device = subdev->device; + unsigned long managed_falcons = acr->base.managed_falcons; + int falcon_id; int ret; if (sb->wpr_set) @@ -787,6 +789,15 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) sb->wpr_set = true; + /* Run LS firmwares post_run hooks */ + for_each_set_bit(falcon_id, &managed_falcons, NVKM_FALCON_END) { + const struct acr_r352_ls_func *func = + acr->func->ls_func[falcon_id]; + + if (func->post_run) + func->post_run(&acr->base, sb); + } + return 0; } diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h index b92125abfc7b..5962a45ec809 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h @@ -157,6 +157,7 @@ struct hsf_load_header { * @generate_bl_desc: function called on a block of bl_desc_size to generate the * proper bootloader descriptor for this LS firmware * @bl_desc_size: size of the bootloader descriptor + * @post_run: hook called right after the ACR is executed * @lhdr_flags: LS flags */ struct acr_r352_ls_func { @@ -164,6 +165,7 @@ struct acr_r352_ls_func { void (*generate_bl_desc)(const struct nvkm_acr *, const struct ls_ucode_img *, u64, void *); u32 bl_desc_size; + void (*post_run)(const struct nvkm_acr *, const struct nvkm_secboot *); u32 lhdr_flags; }; -- git-series 0.8.10
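For illustration, an LS firmware entry that opts into the new hook would look roughly like the sketch below; example_ls_post_run() and example_ls_func are placeholder names (the PMU entry added later in this series is the first real user), and the other fields simply reuse the existing FECS helpers as an example.

/* placeholder hook: e.g. write bootstrap arguments into the falcon's DMEM */
static void
example_ls_post_run(const struct nvkm_acr *acr, const struct nvkm_secboot *sb)
{
}

static const struct acr_r352_ls_func
example_ls_func = {
	.load = acr_ls_ucode_load_fecs,
	.generate_bl_desc = acr_r352_generate_flcn_bl_desc,
	.bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc),
	.post_run = example_ls_post_run,	/* called at the end of acr_r352_bootstrap() */
};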
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 28/33] secboot: support for loading LS PMU firmware
Allow secboot to load a LS PMU firmware. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/Kbuild | 1 +- drm/nouveau/nvkm/subdev/secboot/ls_ucode.h | 4 +- drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c | 89 +++++++++++++++++++- 3 files changed, 93 insertions(+), 1 deletion(-) create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c diff --git a/drm/nouveau/nvkm/subdev/secboot/Kbuild b/drm/nouveau/nvkm/subdev/secboot/Kbuild index 5076d1500f47..094b6801f9e8 100644 --- a/drm/nouveau/nvkm/subdev/secboot/Kbuild +++ b/drm/nouveau/nvkm/subdev/secboot/Kbuild @@ -1,5 +1,6 @@ nvkm-y += nvkm/subdev/secboot/base.o nvkm-y += nvkm/subdev/secboot/ls_ucode_gr.o +nvkm-y += nvkm/subdev/secboot/ls_ucode_pmu.o nvkm-y += nvkm/subdev/secboot/acr.o nvkm-y += nvkm/subdev/secboot/acr_r352.o nvkm-y += nvkm/subdev/secboot/acr_r361.o diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h index 7f4292f740b5..381f07b1216c 100644 --- a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h @@ -27,6 +27,7 @@ #include <core/falcon.h> #include <core/subdev.h> +struct nvkm_acr; /** * struct ls_ucode_img_desc - descriptor of firmware image @@ -146,6 +147,7 @@ struct fw_bl_desc { int acr_ls_ucode_load_fecs(const struct nvkm_subdev *, struct ls_ucode_img *); int acr_ls_ucode_load_gpccs(const struct nvkm_subdev *, struct ls_ucode_img *); - +int acr_ls_ucode_load_pmu(const struct nvkm_subdev *, struct ls_ucode_img *); +void acr_ls_pmu_post_run(const struct nvkm_acr *, const struct nvkm_secboot *); #endif diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c new file mode 100644 index 000000000000..94605ac27341 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_pmu.c @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ + + +#include "ls_ucode.h" +#include "acr.h" + +#include <core/firmware.h> +#include <subdev/pmu.h> + +/** + * ls_ucode_img_load_pmu - load and prepare a LS ucode img for PMU falcon + * + * Load the PMU LS microcode, desc and signature and pack them into a single + * blob. 
+ */ +int +acr_ls_ucode_load_pmu(const struct nvkm_subdev *subdev, + struct ls_ucode_img *img) +{ + const struct firmware *pmu_fw, *pmu_desc, *sig; + struct nvkm_pmu *pmu = subdev->device->pmu; + int ret; + + ret = nvkm_firmware_get(subdev->device, "pmu/image", &pmu_fw); + if (ret) + return ret; + img->ucode_data = kmemdup(pmu_fw->data, pmu_fw->size, GFP_KERNEL); + nvkm_firmware_put(pmu_fw); + if (!img->ucode_data) + return -ENOMEM; + + ret = nvkm_firmware_get(subdev->device, "pmu/desc", &pmu_desc); + if (ret) + return ret; + memcpy(&img->ucode_desc, pmu_desc->data, sizeof(img->ucode_desc)); + img->ucode_size = img->ucode_desc.image_size; + nvkm_firmware_put(pmu_desc); + + ret = nvkm_firmware_get(subdev->device, "pmu/sig", &sig); + if (ret) + return ret; + img->sig_size = sig->size; + img->sig = kmemdup(sig->data, sig->size, GFP_KERNEL); + nvkm_firmware_put(sig); + if (!img->sig) + return -ENOMEM; + + ret = nvkm_pmu_set_version(pmu, img->ucode_desc.app_version); + if (ret) + return ret; + + return 0; +} + +void +acr_ls_pmu_post_run(const struct nvkm_acr *acr, const struct nvkm_secboot *sb) +{ + struct nvkm_device *device = sb->subdev.device; + struct nvkm_pmu *pmu = device->pmu; + u32 cmdline_size = nvkm_pmu_cmdline_size(pmu); + u8 buf[cmdline_size]; + u32 addr_args = acr->dmem_size - cmdline_size; + + if (cmdline_size == 0) + return; + + nvkm_pmu_write_cmdline(pmu, buf); + nvkm_falcon_load_dmem(device, sb->base, buf, addr_args, cmdline_size); +} -- git-series 0.8.10
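acr_ls_pmu_post_run() places the generated command line at the very top of the PMU's DMEM, at the same offset the bootloader descriptor later advertises as argv (see the r352 ACR patch further in this series). A small worked sketch of that offset computation follows; the DMEM and structure sizes used in the comment are illustrative, not taken from real hardware.

/* illustrative only: where the PMU command-line arguments land in DMEM */
static u32
example_cmdline_offset(u32 dmem_size, u32 cmdline_size)
{
	/*
	 * e.g. dmem_size = 0x10000 (64KiB) and cmdline_size = 0x28 (40 bytes)
	 * give 0xffd8, i.e. the last 40 bytes of DMEM.
	 */
	return dmem_size - cmdline_size;
}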
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 29/33] secboot: base support for PMU falcon
Adapt secboot's behavior if a PMU firmware is present, in particular the way LS falcons are reset. Without PMU firmware, secboot needs to be performed again from scratch so all LS falcons are reset. With PMU firmware, we can ask the PMU's ACR unit to reset a specific falcon through a PMU message. As we must preserve the old behavior to avoid breaking user-space, add a few conditionals to the way falcons are reset. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 97 +++++++++++++++++++---- 1 file changed, 82 insertions(+), 15 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index fd316bd2f0a4..bfcfb647f4ad 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -24,6 +24,8 @@ #include <core/gpuobj.h> #include <core/firmware.h> +#include <subdev/mc.h> +#include <subdev/pmu.h> /** * struct hsf_fw_header - HS firmware descriptor @@ -780,7 +782,7 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) if (ret) return ret; - nvkm_debug(&sb->subdev, "running HS load blob\n"); + nvkm_debug(subdev, "running HS load blob\n"); ret = sb->func->run_blob(sb, acr->load_blob); nvkm_secboot_falcon_clear_halt_interrupt(device, sb->base); if (ret) @@ -798,6 +800,50 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) func->post_run(&acr->base, sb); } + /* Re-start ourselves if we are managed */ + if (!nvkm_secboot_is_managed(sb, acr->base.boot_falcon)) + return 0; + + /* Enable interrupts */ + nvkm_wr32(device, sb->base + 0x10, 0xff); + nvkm_mc_intr_mask(device, sb->devidx, true); + + /* Start PMU */ + nvkm_secboot_start(sb, acr->base.boot_falcon); + nvkm_debug(subdev, "PMU started\n"); + + return 0; +} + +/** + * acr_r352_reset_nopmu - dummy reset method when no PMU firmware is loaded + * + * Reset is done by re-executing secure boot from scratch, with lazy bootstrap + * disabled. This has the effect of making all managed falcons ready-to-run. + */ +static int +acr_r352_reset_nopmu(struct acr_r352 *acr, struct nvkm_secboot *sb, + enum nvkm_falconidx falcon) +{ + int ret; + + /* + * Perform secure boot each time we are called on FECS. Since only FECS + * and GPCCS are managed and started together, this ought to be safe. + */ + if (falcon != NVKM_FALCON_FECS) + goto end; + + ret = acr_r352_shutdown(acr, sb); + if (ret) + return ret; + + ret = acr_r352_bootstrap(acr, sb); + if (ret) + return ret; + +end: + acr->falcon_state[falcon] = RESET; return 0; } @@ -813,29 +859,30 @@ acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb, enum nvkm_falconidx falcon) { struct acr_r352 *acr = acr_r352(_acr); + struct nvkm_pmu *pmu = sb->subdev.device->pmu; + const char *fname = nvkm_falcon_name[falcon]; int ret; + /* Not self-managed? Redo secure boot entirely */ + if (!nvkm_secboot_is_managed(sb, _acr->boot_falcon)) + return acr_r352_reset_nopmu(acr, sb, falcon); + /* - * Dummy GM200 implementation: perform secure boot each time we are - * called on FECS. Since only FECS and GPCCS are managed and started - * together, this ought to be safe. - * - * Once we have proper PMU firmware and support, this will be changed - * to a proper call to the PMU method. + * Otherwise ensure secure boot is done, and command the PMU to reset + * the desired falcon. 
*/ - if (falcon != NVKM_FALCON_FECS) - goto end; - - ret = acr_r352_shutdown(acr, sb); + ret = acr_r352_bootstrap(acr, sb); if (ret) return ret; - acr_r352_bootstrap(acr, sb); - if (ret) + nvkm_debug(&sb->subdev, "resetting %s falcon\n", fname); + ret = nvkm_pmu_acr_boot_falcon(pmu, falcon); + if (ret) { + nvkm_error(&sb->subdev, "cannot boot %s falcon\n", fname); return ret; + } + nvkm_debug(&sb->subdev, "falcon %s reset\n", fname); -end: - acr->falcon_state[falcon] = RESET; return 0; } @@ -854,6 +901,9 @@ acr_r352_start(struct nvkm_acr *_acr, struct nvkm_secboot *sb, case NVKM_FALCON_GPCCS: base = 0x41a000; break; + case NVKM_FALCON_PMU: + base = 0x10a000; + break; default: nvkm_error(subdev, "cannot start unhandled falcon!\n"); return -EINVAL; @@ -940,6 +990,23 @@ acr_r352_new_(const struct acr_r352_func *func, enum nvkm_falconidx boot_falcon, acr->base.func = &acr_r352_base_func; acr->func = func; + /* + * If we have a PMU firmware, let it manage the bootstrap of other + * falcons. + */ + if (func->ls_func[NVKM_FALCON_PMU] && + (managed_falcons & BIT(NVKM_FALCON_PMU))) { + int i; + + for (i = 0; i < NVKM_FALCON_END; i++) { + if (i == NVKM_FALCON_PMU) + continue; + + if (func->ls_func[i]) + acr->lazy_bootstrap |= BIT(i); + } + } + return &acr->base; } -- git-series 0.8.10
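Condensed, the reset logic added above amounts to the sketch below (error handling omitted); it is only a summary of the diff, not additional code.

static int
example_reset_flow(struct acr_r352 *acr, struct nvkm_secboot *sb,
		   enum nvkm_falconidx falcon)
{
	/* no managed PMU firmware: redo secure boot from scratch */
	if (!nvkm_secboot_is_managed(sb, acr->base.boot_falcon))
		return acr_r352_reset_nopmu(acr, sb, falcon);

	/* otherwise run secure boot once, then have the PMU's ACR unit
	 * bootstrap the requested falcon on our behalf */
	acr_r352_bootstrap(acr, sb);
	return nvkm_pmu_acr_boot_falcon(sb->subdev.device->pmu, falcon);
}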
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 30/33] secboot: write PMU firmware version into register
The PMU firmware expects to find its own version number in this register. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/subdev/pmu.h | 1 + drm/nouveau/nvkm/subdev/pmu/base.c | 1 + drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 3 +++ 3 files changed, 5 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/include/nvkm/subdev/pmu.h b/drm/nouveau/include/nvkm/subdev/pmu.h index 151003e0500e..71ccb70166d9 100644 --- a/drm/nouveau/include/nvkm/subdev/pmu.h +++ b/drm/nouveau/include/nvkm/subdev/pmu.h @@ -7,6 +7,7 @@ struct nvkm_pmu { const struct nvkm_pmu_func *func; const struct nv_pmu_func *nv_func; struct nvkm_subdev subdev; + u32 fw_version; struct { u32 base; diff --git a/drm/nouveau/nvkm/subdev/pmu/base.c b/drm/nouveau/nvkm/subdev/pmu/base.c index 0ca9cafb18c0..517f7942c57e 100644 --- a/drm/nouveau/nvkm/subdev/pmu/base.c +++ b/drm/nouveau/nvkm/subdev/pmu/base.c @@ -165,6 +165,7 @@ nvkm_pmu_set_version(struct nvkm_pmu *pmu, u32 version) return -EINVAL; }; + pmu->fw_version = version; nvkm_debug(subdev, "firmware version: 0x%08x\n", version); return 0; diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index bfcfb647f4ad..27b16cb2cfe5 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -770,6 +770,7 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) { struct nvkm_subdev *subdev = &sb->subdev; struct nvkm_device *device = subdev->device; + struct nvkm_pmu *pmu = device->pmu; unsigned long managed_falcons = acr->base.managed_falcons; int falcon_id; int ret; @@ -804,6 +805,8 @@ acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb) if (!nvkm_secboot_is_managed(sb, acr->base.boot_falcon)) return 0; + nvkm_wr32(device, sb->base + 0x080, pmu->fw_version); + /* Enable interrupts */ nvkm_wr32(device, sb->base + 0x10, 0xff); nvkm_mc_intr_mask(device, sb->devidx, true); -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 31/33] secboot: enable PMU in r352 ACR
Add the PMU bootloader generator and PMU LS ops that will enable proper PMU operation if the PMU falcon is designated as managed. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 80 +++++++++++++++++++++++- 1 file changed, 80 insertions(+), 0 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index 27b16cb2cfe5..c9091483d45d 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -956,6 +956,85 @@ acr_r352_ls_gpccs_func = { .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD, }; + + +/** + * struct acr_r352_pmu_bl_desc - PMU DMEM bootloader descriptor + * @dma_idx: DMA context to be used by BL while loading code/data + * @code_dma_base: 256B-aligned Physical FB Address where code is located + * @total_code_size: total size of the code part in the ucode + * @code_size_to_load: size of the code part to load in PMU IMEM. + * @code_entry_point: entry point in the code. + * @data_dma_base: Physical FB address where data part of ucode is located + * @data_size: Total size of the data portion. + * @overlay_dma_base: Physical Fb address for resident code present in ucode + * @argc: Total number of args + * @argv: offset where args are copied into PMU's DMEM. + * + * Structure used by the PMU bootloader to load the rest of the code + */ +struct acr_r352_pmu_bl_desc { + u32 dma_idx; + u32 code_dma_base; + u32 code_size_total; + u32 code_size_to_load; + u32 code_entry_point; + u32 data_dma_base; + u32 data_size; + u32 overlay_dma_base; + u32 argc; + u32 argv; + u16 code_dma_base1; + u16 data_dma_base1; + u16 overlay_dma_base1; +}; + +/** + * acr_r352_generate_pmu_bl_desc() - populate a DMEM BL descriptor for PMU LS image + * + */ +static void +acr_r352_generate_pmu_bl_desc(const struct nvkm_acr *acr, + const struct ls_ucode_img *_img, u64 wpr_addr, + void *_desc) +{ + struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img); + const struct ls_ucode_img_desc *pdesc = &_img->ucode_desc; + struct acr_r352_pmu_bl_desc *desc = _desc; + struct nvkm_pmu *pmu = acr->subdev->device->pmu; + u64 base; + u64 addr_code; + u64 addr_data; + u32 addr_args; + + base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; + addr_code = (base + pdesc->app_resident_code_offset) >> 8; + addr_data = (base + pdesc->app_resident_data_offset) >> 8; + addr_args = acr->dmem_size - nvkm_pmu_cmdline_size(pmu); + + desc->dma_idx = FALCON_DMAIDX_UCODE; + desc->code_dma_base = lower_32_bits(addr_code); + desc->code_dma_base1 = upper_32_bits(addr_code); + desc->code_size_total = pdesc->app_size; + desc->code_size_to_load = pdesc->app_resident_code_size; + desc->code_entry_point = pdesc->app_imem_entry; + desc->data_dma_base = lower_32_bits(addr_data); + desc->data_dma_base1 = upper_32_bits(addr_data); + desc->data_size = pdesc->app_resident_data_size; + desc->overlay_dma_base = lower_32_bits(addr_code); + desc->overlay_dma_base1 = upper_32_bits(addr_code); + desc->argc = 1; + desc->argv = addr_args; +} + +static const struct acr_r352_ls_func +acr_r352_ls_pmu_func = { + .load = acr_ls_ucode_load_pmu, + .generate_bl_desc = acr_r352_generate_pmu_bl_desc, + .bl_desc_size = sizeof(struct acr_r352_pmu_bl_desc), + .post_run = acr_ls_pmu_post_run, +}; + const struct acr_r352_func acr_r352_func = { .generate_hs_bl_desc = acr_r352_generate_hs_bl_desc, @@ -966,6 +1045,7 @@ acr_r352_func = { .ls_func = { [NVKM_FALCON_FECS] = &acr_r352_ls_fecs_func, [NVKM_FALCON_GPCCS] = 
&acr_r352_ls_gpccs_func, + [NVKM_FALCON_PMU] = &acr_r352_ls_pmu_func, }, }; -- git-series 0.8.10
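The generator above packs 256-byte-aligned framebuffer addresses into the split code_dma_base/code_dma_base1 fields. A worked example of that packing follows; the address used is arbitrary and chosen only so the upper half is non-zero.

static void
example_pack_addr(struct acr_r352_pmu_bl_desc *desc)
{
	/* 0x12345678900 >> 8 = 0x123456789 */
	u64 addr_code = 0x12345678900ULL >> 8;

	desc->code_dma_base  = lower_32_bits(addr_code);	/* 0x23456789 */
	desc->code_dma_base1 = upper_32_bits(addr_code);	/* 0x1 */
}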
Alexandre Courbot
2016-Nov-21 08:29 UTC
[Nouveau] [PATCH v4 32/33] secboot: support optional falcons
PMU support has been enabled for r352 ACR, but it must remain optional if we want to preserve existing user space that does not include it. Allow ACR to be instantiated with a list of optional LS falcons that will not produce a fatal error if their firmware is not loaded. Also change the secure boot bootstrap logic to be able to fall back to legacy behavior if it turns out the boot falcon's LS firmware cannot be loaded. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/secboot/acr.h | 2 +- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 61 +++++++++++++---------- 2 files changed, 38 insertions(+), 25 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/secboot/acr.h b/drm/nouveau/nvkm/subdev/secboot/acr.h index 175f14fbda61..88825f6e0cb1 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr.h +++ b/drm/nouveau/nvkm/subdev/secboot/acr.h @@ -50,6 +50,7 @@ struct nvkm_acr_func { * * @boot_falcon: ID of the falcon that will perform secure boot * @managed_falcons: bitfield of falcons managed by this ACR + * @optional_falcons: bitfield of falcons we can live without * @start_address: virtual start address of the HS bootloader * @dmem_size: size of DMEM of the managing falcon */ @@ -59,6 +60,7 @@ struct nvkm_acr { enum nvkm_falconidx boot_falcon; unsigned long managed_falcons; + unsigned long optional_falcons; u32 start_address; u32 dmem_size; }; diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c index c9091483d45d..4eab1bbe98a2 100644 --- a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -412,6 +412,12 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) img = acr->func->ls_ucode_img_load(acr, falcon_id); if (IS_ERR(img)) { + if (acr->base.optional_falcons & BIT(falcon_id)) { + managed_falcons &= ~BIT(falcon_id); + nvkm_info(subdev, "skipping %s falcon...\n", + nvkm_falcon_name[falcon_id]); + continue; + } ret = PTR_ERR(img); goto cleanup; } @@ -420,6 +426,23 @@ acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) managed_count++; } + /* Commit the actual list of falcons we will manage from now on */ + acr->base.managed_falcons = managed_falcons; + + /* + * If we have a PMU firmware, let it manage the bootstrap of other + * falcons. + */ + if (acr->func->ls_func[acr->base.boot_falcon] && + (managed_falcons & BIT(acr->base.boot_falcon))) { + for_each_set_bit(falcon_id, &managed_falcons, NVKM_FALCON_END) { + if (falcon_id == acr->base.boot_falcon) + continue; + + acr->lazy_bootstrap |= BIT(falcon_id); + } + } + /* * Fill the WPR and LSF headers with the right offsets and compute * required WPR size @@ -864,20 +887,25 @@ acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb, struct acr_r352 *acr = acr_r352(_acr); struct nvkm_pmu *pmu = sb->subdev.device->pmu; const char *fname = nvkm_falcon_name[falcon]; + bool wpr_already_set = sb->wpr_set; int ret; - /* Not self-managed? Redo secure boot entirely */ - if (!nvkm_secboot_is_managed(sb, _acr->boot_falcon)) - return acr_r352_reset_nopmu(acr, sb, falcon); - - /* - * Otherwise ensure secure boot is done, and command the PMU to reset - * the desired falcon. - */ + /* Make sure secure boot is performed */ ret = acr_r352_bootstrap(acr, sb); if (ret) return ret; + /* No PMU interface? 
*/ + if (!nvkm_secboot_is_managed(sb, _acr->boot_falcon)) { + /* Redo secure boot entirely if it was already done */ + if (wpr_already_set) + return acr_r352_reset_nopmu(acr, sb, falcon); + /* Else return the result of the initial invocation */ + else + return ret; + } + + /* Otherwise just ask the PMU to reset the falcon */ nvkm_debug(&sb->subdev, "resetting %s falcon\n", fname); ret = nvkm_pmu_acr_boot_falcon(pmu, falcon); if (ret) { @@ -1073,23 +1101,6 @@ acr_r352_new_(const struct acr_r352_func *func, enum nvkm_falconidx boot_falcon, acr->base.func = &acr_r352_base_func; acr->func = func; - /* - * If we have a PMU firmware, let it manage the bootstrap of other - * falcons. - */ - if (func->ls_func[NVKM_FALCON_PMU] && - (managed_falcons & BIT(NVKM_FALCON_PMU))) { - int i; - - for (i = 0; i < NVKM_FALCON_END; i++) { - if (i == NVKM_FALCON_PMU) - continue; - - if (func->ls_func[i]) - acr->lazy_bootstrap |= BIT(i); - } - } - return &acr->base; } -- git-series 0.8.10
Enable the PMU in GM20B, managed by secure boot. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/engine/device/base.c | 1 + drm/nouveau/nvkm/subdev/secboot/gm20b.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/drm/nouveau/nvkm/engine/device/base.c b/drm/nouveau/nvkm/engine/device/base.c index 2cbcffe78c3e..b84f762a5a1a 100644 --- a/drm/nouveau/nvkm/engine/device/base.c +++ b/drm/nouveau/nvkm/engine/device/base.c @@ -2139,6 +2139,7 @@ nv12b_chipset = { .mc = gk20a_mc_new, .mmu = gf100_mmu_new, .secboot = gm20b_secboot_new, + .pmu = gm200_pmu_new, .timer = gk20a_timer_new, .top = gk104_top_new, .ce[2] = gm200_ce_new, diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index 6bd3aff1ffb1..b3f46a324691 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -107,9 +107,11 @@ gm20b_secboot_new(struct nvkm_device *device, int index, struct gm200_secboot *gsb; struct nvkm_acr *acr; - acr = acr_r352_new(BIT(NVKM_FALCON_FECS)); + acr = acr_r352_new(BIT(NVKM_FALCON_FECS) | BIT(NVKM_FALCON_PMU)); if (IS_ERR(acr)) return PTR_ERR(acr); + /* Support the initial GM20B firmware release without PMU */ + acr->optional_falcons = BIT(NVKM_FALCON_PMU); gsb = kzalloc(sizeof(*gsb), GFP_KERNEL); if (!gsb) { -- git-series 0.8.10
Alexandre Courbot
2016-Nov-21 08:32 UTC
[Nouveau] [PATCH v4 8/33] secboot: reorganize into more files
Split the act of building the ACR blob from firmware files from the rest of the (chip-dependent) secure boot logic. ACR logic is moved into acr_rxxx.c files, where rxxx corresponds to the compatible release of the NVIDIA driver. At the moment r352 and r361 are supported since firmwares have been released for these versions. Some abstractions are added on top of r352 so r361 can easily be implemented on top of it by just overriding a few hooks. This split makes it possible and easy to reuse the same ACR version on different chips. It also hopefully makes the code much more readable as the different secure boot logics are separated. As more chips and firmware versions will be supported, this is a necessity to not get lost in code that is already quite complex. This is a big commit, but it essentially moves things around (and split the nvkm_secboot structure into two, nvkm_secboot and nvkm_acr). Code semantics should not be affected. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/include/nvkm/subdev/secboot.h | 11 +- drm/nouveau/nvkm/subdev/secboot/Kbuild | 4 +- drm/nouveau/nvkm/subdev/secboot/acr.c | 54 +- drm/nouveau/nvkm/subdev/secboot/acr.h | 69 +- drm/nouveau/nvkm/subdev/secboot/acr_r352.c | 912 ++++++++++++++- drm/nouveau/nvkm/subdev/secboot/acr_r352.h | 126 ++- drm/nouveau/nvkm/subdev/secboot/acr_r361.c | 132 ++- drm/nouveau/nvkm/subdev/secboot/base.c | 96 +- drm/nouveau/nvkm/subdev/secboot/gm200.c | 1201 +------------------ drm/nouveau/nvkm/subdev/secboot/gm200.h | 43 +- drm/nouveau/nvkm/subdev/secboot/gm20b.c | 125 +-- drm/nouveau/nvkm/subdev/secboot/ls_ucode.h | 245 ++++- drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c | 165 ++- drm/nouveau/nvkm/subdev/secboot/priv.h | 326 +----- 14 files changed, 1898 insertions(+), 1611 deletions(-) create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr.c create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr.h create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r352.c create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r352.h create mode 100644 drm/nouveau/nvkm/subdev/secboot/acr_r361.c create mode 100644 drm/nouveau/nvkm/subdev/secboot/gm200.h create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode.h create mode 100644 drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c diff --git a/drm/nouveau/include/nvkm/subdev/secboot.h b/drm/nouveau/include/nvkm/subdev/secboot.h index ffc2204d2a50..d93161090233 100644 --- a/drm/nouveau/include/nvkm/subdev/secboot.h +++ b/drm/nouveau/include/nvkm/subdev/secboot.h @@ -27,16 +27,21 @@ #include <core/falcon.h> /** - * @base: base IO address of the falcon performing secure boot - * @irq_mask: IRQ mask of the falcon performing secure boot - * @enable_mask: enable mask of the falcon performing secure boot + * @base: base IO address of the falcon performing secure boot + * @debug_mode: whether the debug or production signatures should be used */ struct nvkm_secboot { const struct nvkm_secboot_func *func; + struct nvkm_acr *acr; struct nvkm_subdev subdev; enum nvkm_devidx devidx; u32 base; + + u64 wpr_addr; + u32 wpr_size; + + bool debug_mode; }; #define nvkm_secboot(p) container_of((p), struct nvkm_secboot, subdev) diff --git a/drm/nouveau/nvkm/subdev/secboot/Kbuild b/drm/nouveau/nvkm/subdev/secboot/Kbuild index b02b868a6589..5076d1500f47 100644 --- a/drm/nouveau/nvkm/subdev/secboot/Kbuild +++ b/drm/nouveau/nvkm/subdev/secboot/Kbuild @@ -1,3 +1,7 @@ nvkm-y += nvkm/subdev/secboot/base.o +nvkm-y += nvkm/subdev/secboot/ls_ucode_gr.o +nvkm-y += nvkm/subdev/secboot/acr.o +nvkm-y += 
nvkm/subdev/secboot/acr_r352.o +nvkm-y += nvkm/subdev/secboot/acr_r361.o nvkm-y += nvkm/subdev/secboot/gm200.o nvkm-y += nvkm/subdev/secboot/gm20b.o diff --git a/drm/nouveau/nvkm/subdev/secboot/acr.c b/drm/nouveau/nvkm/subdev/secboot/acr.c new file mode 100644 index 000000000000..75dc06557877 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/acr.c @@ -0,0 +1,54 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ + +#include "acr.h" + +#include <core/firmware.h> + +/** + * Convenience function to duplicate a firmware file in memory and check that + * it has the required minimum size. + */ +void * +nvkm_acr_load_firmware(const struct nvkm_subdev *subdev, const char *name, + size_t min_size) +{ + const struct firmware *fw; + void *blob; + int ret; + + ret = nvkm_firmware_get(subdev->device, name, &fw); + if (ret) + return ERR_PTR(ret); + if (fw->size < min_size) { + nvkm_error(subdev, "%s is smaller than expected size %zu\n", + name, min_size); + nvkm_firmware_put(fw); + return ERR_PTR(-EINVAL); + } + blob = kmemdup(fw->data, fw->size, GFP_KERNEL); + nvkm_firmware_put(fw); + if (!blob) + return ERR_PTR(-ENOMEM); + + return blob; +} diff --git a/drm/nouveau/nvkm/subdev/secboot/acr.h b/drm/nouveau/nvkm/subdev/secboot/acr.h new file mode 100644 index 000000000000..7ce11379f6f7 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/acr.h @@ -0,0 +1,69 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ +#ifndef __NVKM_SECBOOT_ACR_H__ +#define __NVKM_SECBOOT_ACR_H__ + +#include "priv.h" + +struct nvkm_acr; + +/** + * struct nvkm_acr_func - properties and functions specific to an ACR + * + * @load: make the ACR ready to run on the given secboot device + * @reset: reset the specified falcon + * @start: start the specified falcon (assumed to have been reset) + */ +struct nvkm_acr_func { + void (*dtor)(struct nvkm_acr *); + int (*oneinit)(struct nvkm_acr *, struct nvkm_secboot *); + int (*fini)(struct nvkm_acr *, struct nvkm_secboot *, bool); + int (*load)(struct nvkm_acr *, struct nvkm_secboot *, + struct nvkm_gpuobj *, u64); + int (*reset)(struct nvkm_acr *, struct nvkm_secboot *, + enum nvkm_falconidx); + int (*start)(struct nvkm_acr *, struct nvkm_secboot *, + enum nvkm_falconidx); +}; + +/** + * struct nvkm_acr - instance of an ACR + * + * @boot_falcon: ID of the falcon that will perform secure boot + * @managed_falcons: bitfield of falcons managed by this ACR + * @start_address: virtual start address of the HS bootloader + */ +struct nvkm_acr { + const struct nvkm_acr_func *func; + const struct nvkm_subdev *subdev; + + enum nvkm_falconidx boot_falcon; + unsigned long managed_falcons; + u32 start_address; +}; + +void *nvkm_acr_load_firmware(const struct nvkm_subdev *, const char *, size_t); + +struct nvkm_acr *acr_r352_new(unsigned long); +struct nvkm_acr *acr_r361_new(unsigned long); + +#endif diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c new file mode 100644 index 000000000000..5622ae9c1a1e --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.c @@ -0,0 +1,912 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */ + +#include "acr_r352.h" +#include "ls_ucode.h" + +#include <core/gpuobj.h> +#include <core/firmware.h> + +/** + * struct hsf_fw_header - HS firmware descriptor + * @sig_dbg_offset: offset of the debug signature + * @sig_dbg_size: size of the debug signature + * @sig_prod_offset: offset of the production signature + * @sig_prod_size: size of the production signature + * @patch_loc: offset of the offset (sic) of where the signature is + * @patch_sig: offset of the offset (sic) to add to sig_*_offset + * @hdr_offset: offset of the load header (see struct hs_load_header) + * @hdr_size: size of above header + * + * This structure is embedded in the HS firmware image at + * hs_bin_hdr.header_offset. + */ +struct hsf_fw_header { + u32 sig_dbg_offset; + u32 sig_dbg_size; + u32 sig_prod_offset; + u32 sig_prod_size; + u32 patch_loc; + u32 patch_sig; + u32 hdr_offset; + u32 hdr_size; +}; + +/** + * struct acr_r352_flcn_bl_desc - DMEM bootloader descriptor + * @signature: 16B signature for secure code. 0s if no secure code + * @ctx_dma: DMA context to be used by BL while loading code/data + * @code_dma_base: 256B-aligned Physical FB Address where code is located + * (falcon's $xcbase register) + * @non_sec_code_off: offset from code_dma_base where the non-secure code is + * located. The offset must be multiple of 256 to help perf + * @non_sec_code_size: the size of the nonSecure code part. + * @sec_code_off: offset from code_dma_base where the secure code is + * located. The offset must be multiple of 256 to help perf + * @sec_code_size: offset from code_dma_base where the secure code is + * located. The offset must be multiple of 256 to help perf + * @code_entry_point: code entry point which will be invoked by BL after + * code is loaded. + * @data_dma_base: 256B aligned Physical FB Address where data is located. + * (falcon's $xdbase register) + * @data_size: size of data block. Should be multiple of 256B + * + * Structure used by the bootloader to load the rest of the code. This has + * to be filled by host and copied into DMEM at offset provided in the + * hsflcn_bl_desc.bl_desc_dmem_load_off. + */ +struct acr_r352_flcn_bl_desc { + u32 reserved[4]; + u32 signature[4]; + u32 ctx_dma; + u32 code_dma_base; + u32 non_sec_code_off; + u32 non_sec_code_size; + u32 sec_code_off; + u32 sec_code_size; + u32 code_entry_point; + u32 data_dma_base; + u32 data_size; +}; + +/** + * acr_r352_generate_flcn_bl_desc - generate generic BL descriptor for LS image + */ +static void +acr_r352_generate_flcn_bl_desc(const struct nvkm_acr *acr, + const struct ls_ucode_img *img, u64 wpr_addr, + void *_desc) +{ + struct acr_r352_flcn_bl_desc *desc = _desc; + const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + u64 base, addr_code, addr_data; + + base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; + addr_code = (base + pdesc->app_resident_code_offset) >> 8; + addr_data = (base + pdesc->app_resident_data_offset) >> 8; + + memset(desc, 0, sizeof(*desc)); + desc->ctx_dma = FALCON_DMAIDX_UCODE; + desc->code_dma_base = lower_32_bits(addr_code); + desc->non_sec_code_off = pdesc->app_resident_code_offset; + desc->non_sec_code_size = pdesc->app_resident_code_size; + desc->code_entry_point = pdesc->app_imem_entry; + desc->data_dma_base = lower_32_bits(addr_data); + desc->data_size = pdesc->app_resident_data_size; +} + + +/** + * struct hsflcn_acr_desc - data section of the HS firmware + * + * This header is to be copied at the beginning of DMEM by the HS bootloader. 
+ * + * @signature: signature of ACR ucode + * @wpr_region_id: region ID holding the WPR header and its details + * @wpr_offset: offset from the WPR region holding the wpr header + * @regions: region descriptors + * @nonwpr_ucode_blob_size: size of LS blob + * @nonwpr_ucode_blob_start: FB location of LS blob is + */ +struct hsflcn_acr_desc { + union { + u8 reserved_dmem[0x200]; + u32 signatures[4]; + } ucode_reserved_space; + u32 wpr_region_id; + u32 wpr_offset; + u32 mmu_mem_range; +#define FLCN_ACR_MAX_REGIONS 2 + struct { + u32 no_regions; + struct { + u32 start_addr; + u32 end_addr; + u32 region_id; + u32 read_mask; + u32 write_mask; + u32 client_mask; + } region_props[FLCN_ACR_MAX_REGIONS]; + } regions; + u32 ucode_blob_size; + u64 ucode_blob_base __aligned(8); + struct { + u32 vpr_enabled; + u32 vpr_start; + u32 vpr_end; + u32 hdcp_policies; + } vpr_desc; +}; + + +/* + * Low-secure blob creation + */ + +typedef int (*lsf_load_func)(const struct nvkm_subdev *, struct ls_ucode_img *); + +/** + * ls_ucode_img_load() - create a lsf_ucode_img and load it + */ +static struct ls_ucode_img * +ls_ucode_img_load(const struct nvkm_subdev *subdev, lsf_load_func load_func) +{ + struct ls_ucode_img *img; + int ret; + + img = kzalloc(sizeof(*img), GFP_KERNEL); + if (!img) + return ERR_PTR(-ENOMEM); + + ret = load_func(subdev, img); + + if (ret) { + kfree(img); + return ERR_PTR(ret); + } + + return img; +} + +#define LSF_LSB_HEADER_ALIGN 256 +#define LSF_BL_DATA_ALIGN 256 +#define LSF_BL_DATA_SIZE_ALIGN 256 +#define LSF_BL_CODE_SIZE_ALIGN 256 +#define LSF_UCODE_DATA_ALIGN 4096 + +/** + * ls_ucode_img_fill_headers - fill the WPR and LSB headers of an image + * @acr: ACR to use + * @img: image to generate for + * @offset: offset in the WPR region where this image starts + * + * Allocate space in the WPR area from offset and write the WPR and LSB headers + * accordingly. + * + * Return: offset at the end of this image. + */ +static u32 +ls_ucode_img_fill_headers(struct acr_r352 *acr, struct ls_ucode_img *img, + u32 offset) +{ + struct lsf_wpr_header *whdr = &img->wpr_header; + struct lsf_lsb_header *lhdr = &img->lsb_header; + struct ls_ucode_img_desc *desc = &img->ucode_desc; + const struct acr_r352_ls_func *func + acr->func->ls_func[img->falcon_id]; + + if (img->ucode_header) { + nvkm_fatal(acr->base.subdev, + "images withough loader are not supported yet!\n"); + return offset; + } + + /* Fill WPR header */ + whdr->falcon_id = img->falcon_id; + whdr->bootstrap_owner = acr->base.boot_falcon; + whdr->status = LSF_IMAGE_STATUS_COPY; + + /* Align, save off, and include an LSB header size */ + offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN); + whdr->lsb_offset = offset; + offset += sizeof(struct lsf_lsb_header); + + /* + * Align, save off, and include the original (static) ucode + * image size + */ + offset = ALIGN(offset, LSF_UCODE_DATA_ALIGN); + lhdr->ucode_off = offset; + offset += img->ucode_size; + + /* + * For falcons that use a boot loader (BL), we append a loader + * desc structure on the end of the ucode image and consider + * this the boot loader data. The host will then copy the loader + * desc args to this space within the WPR region (before locking + * down) and the HS bin will then copy them to DMEM 0 for the + * loader. 
+ */ + lhdr->bl_code_size = ALIGN(desc->bootloader_size, + LSF_BL_CODE_SIZE_ALIGN); + lhdr->ucode_size = ALIGN(desc->app_resident_data_offset, + LSF_BL_CODE_SIZE_ALIGN) + lhdr->bl_code_size; + lhdr->data_size = ALIGN(desc->app_size, LSF_BL_CODE_SIZE_ALIGN) + + lhdr->bl_code_size - lhdr->ucode_size; + /* + * Though the BL is located at 0th offset of the image, the VA + * is different to make sure that it doesn't collide the actual + * OS VA range + */ + lhdr->bl_imem_off = desc->bootloader_imem_offset; + lhdr->app_code_off = desc->app_start_offset + + desc->app_resident_code_offset; + lhdr->app_code_size = desc->app_resident_code_size; + lhdr->app_data_off = desc->app_start_offset + + desc->app_resident_data_offset; + lhdr->app_data_size = desc->app_resident_data_size; + + lhdr->flags = 0; + if (img->falcon_id == acr->base.boot_falcon) + lhdr->flags = LSF_FLAG_DMACTL_REQ_CTX; + + /* GPCCS will be loaded using PRI */ + if (img->falcon_id == NVKM_FALCON_GPCCS) + lhdr->flags |= LSF_FLAG_FORCE_PRIV_LOAD; + + /* Align and save off BL descriptor size */ + lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN); + + /* + * Align, save off, and include the additional BL data + */ + offset = ALIGN(offset, LSF_BL_DATA_ALIGN); + lhdr->bl_data_off = offset; + offset += lhdr->bl_data_size; + + return offset; +} + +/** + * struct ls_ucode_mgr - manager for all LS falcon firmwares + * @count: number of managed LS falcons + * @wpr_size: size of the required WPR region in bytes + * @img_list: linked list of lsf_ucode_img + */ +struct ls_ucode_mgr { + u16 count; + u32 wpr_size; + struct list_head img_list; +}; + +static void +ls_ucode_mgr_init(struct ls_ucode_mgr *mgr) +{ + memset(mgr, 0, sizeof(*mgr)); + INIT_LIST_HEAD(&mgr->img_list); +} + +static void +ls_ucode_mgr_cleanup(struct ls_ucode_mgr *mgr) +{ + struct ls_ucode_img *img, *t; + + list_for_each_entry_safe(img, t, &mgr->img_list, node) { + kfree(img->ucode_data); + kfree(img->ucode_header); + kfree(img); + } +} + +static void +ls_ucode_mgr_add_img(struct ls_ucode_mgr *mgr, struct ls_ucode_img *img) +{ + mgr->count++; + list_add_tail(&img->node, &mgr->img_list); +} + +/** + * ls_ucode_mgr_fill_headers - fill WPR and LSB headers of all managed images + */ +static void +ls_ucode_mgr_fill_headers(struct acr_r352 *acr, struct ls_ucode_mgr *mgr) +{ + struct ls_ucode_img *img; + u32 offset; + + /* + * Start with an array of WPR headers at the base of the WPR. + * The expectation here is that the secure falcon will do a single DMA + * read of this array and cache it internally so it's ok to pack these. + * Also, we add 1 to the falcon count to indicate the end of the array. + */ + offset = sizeof(struct lsf_wpr_header) * (mgr->count + 1); + + /* + * Walk the managed falcons, accounting for the LSB structs + * as well as the ucode images. 
+ */ + list_for_each_entry(img, &mgr->img_list, node) { + offset = ls_ucode_img_fill_headers(acr, img, offset); + } + + mgr->wpr_size = offset; +} + +/** + * ls_ucode_mgr_write_wpr - write the WPR blob contents + */ +static int +ls_ucode_mgr_write_wpr(struct acr_r352 *acr, struct ls_ucode_mgr *mgr, + struct nvkm_gpuobj *wpr_blob, u32 wpr_addr) +{ + struct ls_ucode_img *img; + u32 pos = 0; + + nvkm_kmap(wpr_blob); + + list_for_each_entry(img, &mgr->img_list, node) { + nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header, + sizeof(img->wpr_header)); + + nvkm_gpuobj_memcpy_to(wpr_blob, img->wpr_header.lsb_offset, + &img->lsb_header, sizeof(img->lsb_header)); + + /* Generate and write BL descriptor */ + if (!img->ucode_header) { + const struct acr_r352_ls_func *ls_func + acr->func->ls_func[img->falcon_id]; + u8 gdesc[ls_func->bl_desc_size]; + + ls_func->generate_bl_desc(&acr->base, img, wpr_addr, + gdesc); + + nvkm_gpuobj_memcpy_to(wpr_blob, + img->lsb_header.bl_data_off, + gdesc, ls_func->bl_desc_size); + } + + /* Copy ucode */ + nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off, + img->ucode_data, img->ucode_size); + + pos += sizeof(img->wpr_header); + } + + nvkm_wo32(wpr_blob, pos, NVKM_FALCON_INVALID); + + nvkm_done(wpr_blob); + + return 0; +} + +/* Both size and address of WPR need to be 128K-aligned */ +#define WPR_ALIGNMENT 0x20000 +/** + * acr_r352_prepare_ls_blob() - prepare the LS blob + * + * For each securely managed falcon, load the FW, signatures and bootloaders and + * prepare a ucode blob. Then, compute the offsets in the WPR region for each + * blob, and finally write the headers and ucode blobs into a GPU object that + * will be copied into the WPR region by the HS firmware. + */ +static int +acr_r352_prepare_ls_blob(struct acr_r352 *acr, u64 wpr_addr, u32 wpr_size) +{ + const struct nvkm_subdev *subdev = acr->base.subdev; + struct ls_ucode_mgr mgr; + unsigned long managed_falcons = acr->base.managed_falcons; + int falcon_id; + int ret; + + ls_ucode_mgr_init(&mgr); + + /* Load all LS blobs */ + for_each_set_bit(falcon_id, &managed_falcons, NVKM_FALCON_END) { + struct ls_ucode_img *img; + + img = ls_ucode_img_load(subdev, + acr->func->ls_func[falcon_id]->load); + + if (IS_ERR(img)) { + ret = PTR_ERR(img); + goto cleanup; + } + ls_ucode_mgr_add_img(&mgr, img); + } + + /* + * Fill the WPR and LSF headers with the right offsets and compute + * required WPR size + */ + ls_ucode_mgr_fill_headers(acr, &mgr); + mgr.wpr_size = ALIGN(mgr.wpr_size, WPR_ALIGNMENT); + + /* Allocate GPU object that will contain the WPR region */ + ret = nvkm_gpuobj_new(subdev->device, mgr.wpr_size, WPR_ALIGNMENT, + false, NULL, &acr->ls_blob); + if (ret) + goto cleanup; + + nvkm_debug(subdev, "%d managed LS falcons, WPR size is %d bytes\n", + mgr.count, mgr.wpr_size); + + /* If WPR address and size are not fixed, set them to fit the LS blob */ + if (wpr_size == 0) { + wpr_addr = acr->ls_blob->addr; + wpr_size = mgr.wpr_size; + /* + * But if the WPR region is set by the bootloader, it is illegal for + * the HS blob to be larger than this region. 
+ */ + } else if (mgr.wpr_size > wpr_size) { + nvkm_error(subdev, "WPR region too small for FW blob!\n"); + nvkm_error(subdev, "required: %dB\n", mgr.wpr_size); + nvkm_error(subdev, "available: %dB\n", wpr_size); + ret = -ENOSPC; + goto cleanup; + } + + /* Write LS blob */ + ret = ls_ucode_mgr_write_wpr(acr, &mgr, acr->ls_blob, wpr_addr); + if (ret) + nvkm_gpuobj_del(&acr->ls_blob); + +cleanup: + ls_ucode_mgr_cleanup(&mgr); + + return ret; +} + + + + +/** + * acr_r352_hsf_patch_signature() - patch HS blob with correct signature + */ +static void +acr_r352_hsf_patch_signature(struct nvkm_secboot *sb, void *acr_image) +{ + struct fw_bin_header *hsbin_hdr = acr_image; + struct hsf_fw_header *fw_hdr = acr_image + hsbin_hdr->header_offset; + void *hs_data = acr_image + hsbin_hdr->data_offset; + void *sig; + u32 sig_size; + + /* Falcon in debug or production mode? */ + if (sb->debug_mode) { + sig = acr_image + fw_hdr->sig_dbg_offset; + sig_size = fw_hdr->sig_dbg_size; + } else { + sig = acr_image + fw_hdr->sig_prod_offset; + sig_size = fw_hdr->sig_prod_size; + } + + /* Patch signature */ + memcpy(hs_data + fw_hdr->patch_loc, sig + fw_hdr->patch_sig, sig_size); +} + +static void +acr_r352_fixup_hs_desc(struct acr_r352 *acr, struct nvkm_secboot *sb, + struct hsflcn_acr_desc *desc) +{ + struct nvkm_gpuobj *ls_blob = acr->ls_blob; + + desc->ucode_blob_base = ls_blob->addr; + desc->ucode_blob_size = ls_blob->size; + + desc->wpr_offset = 0; + + /* WPR region information if WPR is not fixed */ + if (sb->wpr_size == 0) { + desc->wpr_region_id = 1; + desc->regions.no_regions = 1; + desc->regions.region_props[0].region_id = 1; + desc->regions.region_props[0].start_addr = ls_blob->addr >> 8; + desc->regions.region_props[0].end_addr + (ls_blob->addr + ls_blob->size) >> 8; + } +} + +static void +acr_r352_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc, + u64 offset) +{ + struct acr_r352_flcn_bl_desc *bl_desc = _bl_desc; + u64 addr_code, addr_data; + + memset(bl_desc, 0, sizeof(*bl_desc)); + addr_code = offset >> 8; + addr_data = (offset + hdr->data_dma_base) >> 8; + + bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; + bl_desc->code_dma_base = lower_32_bits(addr_code); + bl_desc->non_sec_code_off = hdr->non_sec_code_off; + bl_desc->non_sec_code_size = hdr->non_sec_code_size; + bl_desc->sec_code_off = hdr->app[0].sec_code_off; + bl_desc->sec_code_size = hdr->app[0].sec_code_size; + bl_desc->code_entry_point = 0; + bl_desc->data_dma_base = lower_32_bits(addr_data); + bl_desc->data_size = hdr->data_size; +} + +/** + * acr_r352_prepare_hs_blob - load and prepare a HS blob and BL descriptor + * + * @sb secure boot instance to prepare for + * @fw name of the HS firmware to load + * @blob pointer to gpuobj that will be allocated to receive the HS FW payload + * @bl_desc pointer to the BL descriptor to write for this firmware + * @patch whether we should patch the HS descriptor (only for HS loaders) + */ +static int +acr_r352_prepare_hs_blob(struct acr_r352 *acr, struct nvkm_secboot *sb, + const char *fw, struct nvkm_gpuobj **blob, + struct hsf_load_header *load_header, bool patch) +{ + struct nvkm_subdev *subdev = &sb->subdev; + void *acr_image; + struct fw_bin_header *hsbin_hdr; + struct hsf_fw_header *fw_hdr; + struct hsf_load_header *load_hdr; + void *acr_data; + int ret; + + acr_image = nvkm_acr_load_firmware(subdev, fw, 0); + if (IS_ERR(acr_image)) + return PTR_ERR(acr_image); + + hsbin_hdr = acr_image; + fw_hdr = acr_image + hsbin_hdr->header_offset; + load_hdr = acr_image + fw_hdr->hdr_offset; + 
acr_data = acr_image + hsbin_hdr->data_offset; + + /* Patch signature */ + acr_r352_hsf_patch_signature(sb, acr_image); + + /* Patch descriptor with WPR information? */ + if (patch) { + struct hsflcn_acr_desc *desc; + + desc = acr_data + load_hdr->data_dma_base; + acr_r352_fixup_hs_desc(acr, sb, desc); + } + + if (load_hdr->num_apps > ACR_R352_MAX_APPS) { + nvkm_error(subdev, "more apps (%d) than supported (%d)!", + load_hdr->num_apps, ACR_R352_MAX_APPS); + ret = -EINVAL; + goto cleanup; + } + memcpy(load_header, load_hdr, sizeof(*load_header) + + (sizeof(load_hdr->app[0]) * load_hdr->num_apps)); + + /* Create ACR blob and copy HS data to it */ + ret = nvkm_gpuobj_new(subdev->device, ALIGN(hsbin_hdr->data_size, 256), + 0x1000, false, NULL, blob); + if (ret) + goto cleanup; + + nvkm_kmap(*blob); + nvkm_gpuobj_memcpy_to(*blob, 0, acr_data, hsbin_hdr->data_size); + nvkm_done(*blob); + +cleanup: + kfree(acr_image); + + return ret; +} + +static int +acr_r352_prepare_hsbl_blob(struct acr_r352 *acr) +{ + const struct nvkm_subdev *subdev = acr->base.subdev; + struct fw_bin_header *hdr; + struct fw_bl_desc *hsbl_desc; + + acr->hsbl_blob = nvkm_acr_load_firmware(subdev, "acr/bl", 0); + if (IS_ERR(acr->hsbl_blob)) { + int ret = PTR_ERR(acr->hsbl_blob); + + acr->hsbl_blob = NULL; + return ret; + } + + hdr = acr->hsbl_blob; + hsbl_desc = acr->hsbl_blob + hdr->header_offset; + + /* virtual start address for boot vector */ + acr->base.start_address = hsbl_desc->start_tag << 8; + + return 0; +} + +/** + * acr_r352_load_blobs - load blobs common to all ACR V1 versions. + * + * This includes the LS blob, HS ucode loading blob, and HS bootloader. + * + * The HS ucode unload blob is only used on dGPU if the WPR region is variable. + */ +int +acr_r352_load_blobs(struct acr_r352 *acr, struct nvkm_secboot *sb) +{ + int ret; + + /* Firmware already loaded? */ + if (acr->firmware_ok) + return 0; + + /* Load and prepare the managed falcon's firmwares */ + ret = acr_r352_prepare_ls_blob(acr, sb->wpr_addr, sb->wpr_size); + if (ret) + return ret; + + /* Load the HS firmware that will load the LS firmwares */ + if (!acr->load_blob) { + ret = acr_r352_prepare_hs_blob(acr, sb, "acr/ucode_load", + &acr->load_blob, + &acr->load_bl_header, true); + if (ret) + return ret; + } + + /* If the ACR region is dynamically programmed, we need an unload FW */ + if (sb->wpr_size == 0) { + ret = acr_r352_prepare_hs_blob(acr, sb, "acr/ucode_unload", + &acr->unload_blob, + &acr->unload_bl_header, false); + if (ret) + return ret; + } + + /* Load the HS firmware bootloader */ + if (!acr->hsbl_blob) { + ret = acr_r352_prepare_hsbl_blob(acr); + if (ret) + return ret; + } + + acr->firmware_ok = true; + nvkm_debug(&sb->subdev, "LS blob successfully created\n"); + + return 0; +} + +/** + * acr_r352_load() - prepare HS falcon to run the specified blob, mapped + * at GPU address offset. 
+ */ +static int +acr_r352_load(struct nvkm_acr *_acr, struct nvkm_secboot *sb, + struct nvkm_gpuobj *blob, u64 offset) +{ + struct acr_r352 *acr = acr_r352(_acr); + struct nvkm_device *device = sb->subdev.device; + struct fw_bin_header *hdr = acr->hsbl_blob; + struct fw_bl_desc *hsbl_desc = acr->hsbl_blob + hdr->header_offset; + void *blob_data = acr->hsbl_blob + hdr->data_offset; + void *hsbl_code = blob_data + hsbl_desc->code_off; + void *hsbl_data = blob_data + hsbl_desc->data_off; + u32 code_size = ALIGN(hsbl_desc->code_size, 256); + const struct hsf_load_header *load_hdr; + const u32 base = sb->base; + const u32 bl_desc_size = acr->func->hs_bl_desc_size; + u8 bl_desc[bl_desc_size]; + u32 code_start; + + /* Find the bootloader descriptor for our blob and copy it */ + if (blob == acr->load_blob) { + load_hdr = &acr->load_bl_header; + } else if (blob == acr->unload_blob) { + load_hdr = &acr->unload_bl_header; + } else { + nvkm_error(_acr->subdev, "invalid secure boot blob!\n"); + return -EINVAL; + } + + /* + * Copy HS bootloader data + */ + nvkm_falcon_load_dmem(device, sb->base, hsbl_data, 0x00000, + hsbl_desc->data_size); + + /* Copy HS bootloader code to end of IMEM */ + code_start = (nvkm_rd32(device, base + 0x108) & 0x1ff) << 8; + code_start -= code_size; + nvkm_falcon_load_imem(device, sb->base, hsbl_code, code_start, + code_size, hsbl_desc->start_tag); + + /* Generate the BL header */ + acr->func->generate_hs_bl_desc(load_hdr, bl_desc, offset); + + /* + * Copy HS BL header where the HS descriptor expects it to be + */ + nvkm_falcon_load_dmem(device, base, &bl_desc, hsbl_desc->dmem_load_off, + bl_desc_size); + + return 0; +} + +/* + * acr_r352_reset() - execute secure boot from the prepared state + * + * Load the HS bootloader and ask the falcon to run it. This will in turn + * load the HS firmware and run it, so once the falcon stops all the managed + * falcons should have their LS firmware loaded and be ready to run. + */ +static int +acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb, + enum nvkm_falconidx falcon) +{ + struct acr_r352 *acr = acr_r352(_acr); + int ret; + + /* Make sure all blobs are ready */ + ret = acr_r352_load_blobs(acr, sb); + if (ret) + return ret; + + /* + * Dummy GM200 implementation: perform secure boot each time we are + * called on FECS. Since only FECS and GPCCS are managed and started + * together, this ought to be safe. + * + * Once we have proper PMU firmware and support, this will be changed + * to a proper call to the PMU method. 
+ */ + if (falcon != NVKM_FALCON_FECS) + goto end; + + /* If WPR is set and we have an unload blob, run it to unlock WPR */ + if (acr->unload_blob && + acr->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) { + ret = sb->func->run_blob(sb, acr->unload_blob); + if (ret) + return ret; + } + + /* Reload all managed falcons */ + ret = sb->func->run_blob(sb, acr->load_blob); + if (ret) + return ret; + +end: + acr->falcon_state[falcon] = RESET; + return 0; +} + +static int +acr_r352_start(struct nvkm_acr *_acr, struct nvkm_secboot *sb, + enum nvkm_falconidx falcon) +{ + struct acr_r352 *acr = acr_r352(_acr); + const struct nvkm_subdev *subdev = &sb->subdev; + int base; + + switch (falcon) { + case NVKM_FALCON_FECS: + base = 0x409000; + break; + case NVKM_FALCON_GPCCS: + base = 0x41a000; + break; + default: + nvkm_error(subdev, "cannot start unhandled falcon!\n"); + return -EINVAL; + } + + nvkm_wr32(subdev->device, base + 0x130, 0x00000002); + acr->falcon_state[falcon] = RUNNING; + + return 0; +} + +static int +acr_r352_fini(struct nvkm_acr *_acr, struct nvkm_secboot *sb, bool suspend) +{ + struct acr_r352 *acr = acr_r352(_acr); + int ret = 0; + int i; + + /* Run the unload blob to unprotect the WPR region */ + if (acr->unload_blob && + acr->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) + ret = sb->func->run_blob(sb, acr->unload_blob); + + for (i = 0; i < NVKM_FALCON_END; i++) + acr->falcon_state[i] = NON_SECURE; + + return ret; +} + +static void +acr_r352_dtor(struct nvkm_acr *_acr) +{ + struct acr_r352 *acr = acr_r352(_acr); + + nvkm_gpuobj_del(&acr->unload_blob); + + kfree(acr->hsbl_blob); + nvkm_gpuobj_del(&acr->load_blob); + nvkm_gpuobj_del(&acr->ls_blob); + + kfree(acr); +} + +const struct acr_r352_ls_func +acr_r352_ls_fecs_func = { + .load = acr_ls_ucode_load_fecs, + .generate_bl_desc = acr_r352_generate_flcn_bl_desc, + .bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc), +}; + +const struct acr_r352_ls_func +acr_r352_ls_gpccs_func = { + .load = acr_ls_ucode_load_gpccs, + .generate_bl_desc = acr_r352_generate_flcn_bl_desc, + .bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc), +}; + +const struct acr_r352_func +acr_r352_func = { + .generate_hs_bl_desc = acr_r352_generate_hs_bl_desc, + .hs_bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc), + .ls_func = { + [NVKM_FALCON_FECS] = &acr_r352_ls_fecs_func, + [NVKM_FALCON_GPCCS] = &acr_r352_ls_gpccs_func, + }, +}; + +static const struct nvkm_acr_func +acr_r352_base_func = { + .dtor = acr_r352_dtor, + .fini = acr_r352_fini, + .load = acr_r352_load, + .reset = acr_r352_reset, + .start = acr_r352_start, +}; + +struct nvkm_acr * +acr_r352_new_(const struct acr_r352_func *func, enum nvkm_falconidx boot_falcon, + unsigned long managed_falcons) +{ + struct acr_r352 *acr; + + acr = kzalloc(sizeof(*acr), GFP_KERNEL); + if (!acr) + return ERR_PTR(-ENOMEM); + + acr->base.boot_falcon = boot_falcon; + acr->base.managed_falcons = managed_falcons; + acr->base.func = &acr_r352_base_func; + acr->func = func; + + return &acr->base; +} + +struct nvkm_acr * +acr_r352_new(unsigned long managed_falcons) +{ + return acr_r352_new_(&acr_r352_func, NVKM_FALCON_PMU, managed_falcons); +} diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h new file mode 100644 index 000000000000..38ac2a73f585 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/acr_r352.h @@ -0,0 +1,126 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ +#ifndef __NVKM_SECBOOT_ACR_R352_H__ +#define __NVKM_SECBOOT_ACR_R352_H__ + +#include "acr.h" + +struct ls_ucode_img; + +#define ACR_R352_MAX_APPS 8 + +struct hsf_load_header_app { + u32 sec_code_off; + u32 sec_code_size; +}; + +/** + * struct hsf_load_header - HS firmware load header + */ +struct hsf_load_header { + u32 non_sec_code_off; + u32 non_sec_code_size; + u32 data_dma_base; + u32 data_size; + u32 num_apps; + struct hsf_load_header_app app[0]; +}; + +/** + * struct acr_r352_ls_func - manages a single LS firmware + * + * @load: load the external firmware into a ls_ucode_img + * @generate_bl_desc: function called on a block of bl_desc_size to generate the + * proper bootloader descriptor for this LS firmware + * @bl_desc_size: size of the bootloader descriptor + */ +struct acr_r352_ls_func { + int (*load)(const struct nvkm_subdev *, struct ls_ucode_img *); + void (*generate_bl_desc)(const struct nvkm_acr *, + const struct ls_ucode_img *, u64, void *); + u32 bl_desc_size; +}; + +/** + * struct acr_r352_func - manages nuances between ACR versions + * + * @generate_hs_bl_desc: function called on a block of bl_desc_size to generate + * the proper HS bootloader descriptor + * @hs_bl_desc_size: size of the HS bootloader descriptor + */ +struct acr_r352_func { + void (*generate_hs_bl_desc)(const struct hsf_load_header *, void *, + u64); + u32 hs_bl_desc_size; + + const struct acr_r352_ls_func *ls_func[NVKM_FALCON_END]; +}; + +/** + * struct acr_r352 - ACR data for driver release 352 (and beyond) + */ +struct acr_r352 { + struct nvkm_acr base; + const struct acr_r352_func *func; + + /* + * HS FW - lock WPR region (dGPU only) and load LS FWs + * on Tegra the HS FW copies the LS blob into the fixed WPR instead + */ + struct nvkm_gpuobj *load_blob; + struct { + struct hsf_load_header load_bl_header; + struct hsf_load_header_app __load_apps[ACR_R352_MAX_APPS]; + }; + + /* HS FW - unlock WPR region (dGPU only) */ + struct nvkm_gpuobj *unload_blob; + struct { + struct hsf_load_header unload_bl_header; + struct hsf_load_header_app __unload_apps[ACR_R352_MAX_APPS]; + }; + + /* HS bootloader */ + void *hsbl_blob; + + /* LS FWs, to be loaded by the HS ACR */ + struct nvkm_gpuobj *ls_blob; + + /* Firmware already loaded? 
 */
+	bool firmware_ok;
+
+	/* To keep track of the state of all managed falcons */
+	enum {
+		/* In non-secure state, no firmware loaded, no privileges */
+		NON_SECURE = 0,
+		/* In low-secure mode and ready to be started */
+		RESET,
+		/* In low-secure mode and running */
+		RUNNING,
+	} falcon_state[NVKM_FALCON_END];
+};
+#define acr_r352(acr) container_of(acr, struct acr_r352, base)
+
+struct nvkm_acr *acr_r352_new_(const struct acr_r352_func *,
+			       enum nvkm_falconidx, unsigned long);
+
+#endif
diff --git a/drm/nouveau/nvkm/subdev/secboot/acr_r361.c b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c
new file mode 100644
index 000000000000..d2c01af50d2e
--- /dev/null
+++ b/drm/nouveau/nvkm/subdev/secboot/acr_r361.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include "acr_r352.h"
+#include "ls_ucode.h"
+
+/**
+ * struct acr_r361_flcn_bl_desc - DMEM bootloader descriptor
+ * @signature:		16B signature for secure code. 0s if no secure code
+ * @ctx_dma:		DMA context to be used by BL while loading code/data
+ * @code_dma_base:	256B-aligned Physical FB Address where code is located
+ *			(falcon's $xcbase register)
+ * @non_sec_code_off:	offset from code_dma_base where the non-secure code is
+ *			located. The offset must be a multiple of 256 to help perf
+ * @non_sec_code_size:	the size of the nonSecure code part.
+ * @sec_code_off:	offset from code_dma_base where the secure code is
+ *			located. The offset must be a multiple of 256 to help perf
+ * @sec_code_size:	the size of the secure code part.
+ * @code_entry_point:	code entry point which will be invoked by BL after
+ *			code is loaded.
+ * @data_dma_base:	256B aligned Physical FB Address where data is located.
+ *			(falcon's $xdbase register)
+ * @data_size:		size of data block. Should be a multiple of 256B
+ *
+ * Structure used by the bootloader to load the rest of the code. This has
+ * to be filled by host and copied into DMEM at offset provided in the
+ * hsflcn_bl_desc.bl_desc_dmem_load_off.
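+ *
+ * Note: code_dma_base and data_dma_base are assumed here to be the usual
+ * lo/hi split of the 64-bit FB address performed by u64_to_flcn64() (e.g.
+ * 0x123456700 becomes .lo = 0x23456700, .hi = 0x1); unlike the GM20B
+ * descriptor, these are full byte addresses rather than addresses
+ * pre-shifted right by 8 bits.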
+ */ +struct acr_r361_flcn_bl_desc { + u32 reserved[4]; + u32 signature[4]; + u32 ctx_dma; + struct flcn_u64 code_dma_base; + u32 non_sec_code_off; + u32 non_sec_code_size; + u32 sec_code_off; + u32 sec_code_size; + u32 code_entry_point; + struct flcn_u64 data_dma_base; + u32 data_size; +}; + +static void +acr_r361_generate_flcn_bl_desc(const struct nvkm_acr *acr, + const struct ls_ucode_img *img, u64 wpr_addr, + void *_desc) +{ + struct acr_r361_flcn_bl_desc *desc = _desc; + const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; + u64 base, addr_code, addr_data; + + base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; + addr_code = base + pdesc->app_resident_code_offset; + addr_data = base + pdesc->app_resident_data_offset; + + memset(desc, 0, sizeof(*desc)); + desc->ctx_dma = FALCON_DMAIDX_UCODE; + desc->code_dma_base = u64_to_flcn64(addr_code); + desc->non_sec_code_off = pdesc->app_resident_code_offset; + desc->non_sec_code_size = pdesc->app_resident_code_size; + desc->code_entry_point = pdesc->app_imem_entry; + desc->data_dma_base = u64_to_flcn64(addr_data); + desc->data_size = pdesc->app_resident_data_size; +} + +static void +acr_r361_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc, + u64 offset) +{ + struct acr_r361_flcn_bl_desc *bl_desc = _bl_desc; + + memset(bl_desc, 0, sizeof(*bl_desc)); + bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; + bl_desc->code_dma_base = u64_to_flcn64(offset); + bl_desc->non_sec_code_off = hdr->non_sec_code_off; + bl_desc->non_sec_code_size = hdr->non_sec_code_size; + bl_desc->sec_code_off = hdr->app[0].sec_code_off; + bl_desc->sec_code_size = hdr->app[0].sec_code_size; + bl_desc->code_entry_point = 0; + bl_desc->data_dma_base = u64_to_flcn64(offset + hdr->data_dma_base); + bl_desc->data_size = hdr->data_size; +} + +const struct acr_r352_ls_func +acr_r361_ls_fecs_func = { + .load = acr_ls_ucode_load_fecs, + .generate_bl_desc = acr_r361_generate_flcn_bl_desc, + .bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc), +}; + +const struct acr_r352_ls_func +acr_r361_ls_gpccs_func = { + .load = acr_ls_ucode_load_gpccs, + .generate_bl_desc = acr_r361_generate_flcn_bl_desc, + .bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc), +}; + +const struct acr_r352_func +acr_r361_func = { + .generate_hs_bl_desc = acr_r361_generate_hs_bl_desc, + .hs_bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc), + .ls_func = { + [NVKM_FALCON_FECS] = &acr_r361_ls_fecs_func, + [NVKM_FALCON_GPCCS] = &acr_r361_ls_gpccs_func, + }, +}; + +struct nvkm_acr * +acr_r361_new(unsigned long managed_falcons) +{ + return acr_r352_new_(&acr_r361_func, NVKM_FALCON_PMU, managed_falcons); +} diff --git a/drm/nouveau/nvkm/subdev/secboot/base.c b/drm/nouveau/nvkm/subdev/secboot/base.c index ea36851358ea..b393ae8b8b12 100644 --- a/drm/nouveau/nvkm/subdev/secboot/base.c +++ b/drm/nouveau/nvkm/subdev/secboot/base.c @@ -19,7 +19,70 @@ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. */ + +/* + * Secure boot is the process by which NVIDIA-signed firmware is loaded into + * some of the falcons of a GPU. For production devices this is the only way + * for the firmware to access useful (but sensitive) registers. + * + * A Falcon microprocessor supporting advanced security modes can run in one of + * three modes: + * + * - Non-secure (NS). In this mode, functionality is similar to Falcon + * architectures before security modes were introduced (pre-Maxwell), but + * capability is restricted. 
In particular, certain registers may be + * inaccessible for reads and/or writes, and physical memory access may be + * disabled (on certain Falcon instances). This is the only possible mode that + * can be used if you don't have microcode cryptographically signed by NVIDIA. + * + * - Heavy Secure (HS). In this mode, the microprocessor is a black box - it's + * not possible to read or write any Falcon internal state or Falcon registers + * from outside the Falcon (for example, from the host system). The only way + * to enable this mode is by loading microcode that has been signed by NVIDIA. + * (The loading process involves tagging the IMEM block as secure, writing the + * signature into a Falcon register, and starting execution. The hardware will + * validate the signature, and if valid, grant HS privileges.) + * + * - Light Secure (LS). In this mode, the microprocessor has more privileges + * than NS but fewer than HS. Some of the microprocessor state is visible to + * host software to ease debugging. The only way to enable this mode is by HS + * microcode enabling LS mode. Some privileges available to HS mode are not + * available here. LS mode is introduced in GM20x. + * + * Secure boot consists in temporarily switching a HS-capable falcon (typically + * PMU) into HS mode in order to validate the LS firmwares of managed falcons, + * load them, and switch managed falcons into LS mode. Once secure boot + * completes, no falcon remains in HS mode. + * + * Secure boot requires a write-protected memory region (WPR) which can only be + * written by the secure falcon. On dGPU, the driver sets up the WPR region in + * video memory. On Tegra, it is set up by the bootloader and its location and + * size written into memory controller registers. + * + * The secure boot process takes place as follows: + * + * 1) A LS blob is constructed that contains all the LS firmwares we want to + * load, along with their signatures and bootloaders. + * + * 2) A HS blob (also called ACR) is created that contains the signed HS + * firmware in charge of loading the LS firmwares into their respective + * falcons. + * + * 3) The HS blob is loaded (via its own bootloader) and executed on the + * HS-capable falcon. It authenticates itself, switches the secure falcon to + * HS mode and setup the WPR region around the LS blob (dGPU) or copies the + * LS blob into the WPR region (Tegra). + * + * 4) The LS blob is now secure from all external tampering. The HS falcon + * checks the signatures of the LS firmwares and, if valid, switches the + * managed falcons to LS mode and makes them ready to run the LS firmware. + * + * 5) The managed falcons remain in LS mode and can be started. + * + */ + #include "priv.h" +#include "acr.h" #include <core/falcon.h> #include <subdev/mc.h> @@ -154,12 +217,12 @@ int nvkm_secboot_reset(struct nvkm_secboot *sb, u32 falcon) { /* Unmanaged falcon? */ - if (!(BIT(falcon) & sb->func->managed_falcons)) { + if (!(BIT(falcon) & sb->acr->managed_falcons)) { nvkm_error(&sb->subdev, "cannot reset unmanaged falcon!\n"); return -EINVAL; } - return sb->func->reset(sb, falcon); + return sb->acr->func->reset(sb->acr, sb, falcon); } /** @@ -169,24 +232,24 @@ int nvkm_secboot_start(struct nvkm_secboot *sb, u32 falcon) { /* Unmanaged falcon? 
*/ - if (!(BIT(falcon) & sb->func->managed_falcons)) { + if (!(BIT(falcon) & sb->acr->managed_falcons)) { nvkm_error(&sb->subdev, "cannot start unmanaged falcon!\n"); return -EINVAL; } - return sb->func->start(sb, falcon); + return sb->acr->func->start(sb->acr, sb, falcon); } /** * nvkm_secboot_is_managed() - check whether a given falcon is securely-managed */ bool -nvkm_secboot_is_managed(struct nvkm_secboot *secboot, enum nvkm_falconidx fid) +nvkm_secboot_is_managed(struct nvkm_secboot *sb, enum nvkm_falconidx fid) { - if (!secboot) + if (!sb) return false; - return secboot->func->managed_falcons & BIT(fid); + return sb->acr->managed_falcons & BIT(fid); } static int @@ -239,17 +302,20 @@ nvkm_secboot = { }; int -nvkm_secboot_ctor(const struct nvkm_secboot_func *func, +nvkm_secboot_ctor(const struct nvkm_secboot_func *func, struct nvkm_acr *acr, struct nvkm_device *device, int index, struct nvkm_secboot *sb) { unsigned long id; + u32 val; nvkm_subdev_ctor(&nvkm_secboot, device, index, &sb->subdev); sb->func = func; + sb->acr = acr; + acr->subdev = &sb->subdev; /* setup the performing falcon's base address and masks */ - switch (func->boot_falcon) { + switch (acr->boot_falcon) { case NVKM_FALCON_PMU: sb->devidx = NVKM_SUBDEV_PMU; sb->base = 0x10a000; @@ -259,8 +325,18 @@ nvkm_secboot_ctor(const struct nvkm_secboot_func *func, return -EINVAL; }; + /* Is the falcon in debug mode? */ + val = nvkm_rd32(sb->subdev.device, sb->base + 0xc08); + sb->debug_mode = (val >> 20) & 0x1; + + val = nvkm_rd32(device, sb->base + 0x108); + + nvkm_debug(&sb->subdev, "using %s falcon in %s mode\n", + nvkm_falcon_name[acr->boot_falcon], + sb->debug_mode ? "debug" : "prod"); + nvkm_debug(&sb->subdev, "securely managed falcons:\n"); - for_each_set_bit(id, &sb->func->managed_falcons, NVKM_FALCON_END) + for_each_set_bit(id, &acr->managed_falcons, NVKM_FALCON_END) nvkm_debug(&sb->subdev, "- %s\n", nvkm_falcon_name[id]); return 0; diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drm/nouveau/nvkm/subdev/secboot/gm200.c index 3d4ae8324547..c88895f90db8 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm200.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.c @@ -20,1030 +20,20 @@ * DEALINGS IN THE SOFTWARE. */ -/* - * Secure boot is the process by which NVIDIA-signed firmware is loaded into - * some of the falcons of a GPU. For production devices this is the only way - * for the firmware to access useful (but sensitive) registers. - * - * A Falcon microprocessor supporting advanced security modes can run in one of - * three modes: - * - * - Non-secure (NS). In this mode, functionality is similar to Falcon - * architectures before security modes were introduced (pre-Maxwell), but - * capability is restricted. In particular, certain registers may be - * inaccessible for reads and/or writes, and physical memory access may be - * disabled (on certain Falcon instances). This is the only possible mode that - * can be used if you don't have microcode cryptographically signed by NVIDIA. - * - * - Heavy Secure (HS). In this mode, the microprocessor is a black box - it's - * not possible to read or write any Falcon internal state or Falcon registers - * from outside the Falcon (for example, from the host system). The only way - * to enable this mode is by loading microcode that has been signed by NVIDIA. - * (The loading process involves tagging the IMEM block as secure, writing the - * signature into a Falcon register, and starting execution. The hardware will - * validate the signature, and if valid, grant HS privileges.) 
- * - * - Light Secure (LS). In this mode, the microprocessor has more privileges - * than NS but fewer than HS. Some of the microprocessor state is visible to - * host software to ease debugging. The only way to enable this mode is by HS - * microcode enabling LS mode. Some privileges available to HS mode are not - * available here. LS mode is introduced in GM20x. - * - * Secure boot consists in temporarily switching a HS-capable falcon (typically - * PMU) into HS mode in order to validate the LS firmwares of managed falcons, - * load them, and switch managed falcons into LS mode. Once secure boot - * completes, no falcon remains in HS mode. - * - * Secure boot requires a write-protected memory region (WPR) which can only be - * written by the secure falcon. On dGPU, the driver sets up the WPR region in - * video memory. On Tegra, it is set up by the bootloader and its location and - * size written into memory controller registers. - * - * The secure boot process takes place as follows: - * - * 1) A LS blob is constructed that contains all the LS firmwares we want to - * load, along with their signatures and bootloaders. - * - * 2) A HS blob (also called ACR) is created that contains the signed HS - * firmware in charge of loading the LS firmwares into their respective - * falcons. - * - * 3) The HS blob is loaded (via its own bootloader) and executed on the - * HS-capable falcon. It authenticates itself, switches the secure falcon to - * HS mode and setup the WPR region around the LS blob (dGPU) or copies the - * LS blob into the WPR region (Tegra). - * - * 4) The LS blob is now secure from all external tampering. The HS falcon - * checks the signatures of the LS firmwares and, if valid, switches the - * managed falcons to LS mode and makes them ready to run the LS firmware. - * - * 5) The managed falcons remain in LS mode and can be started. - * - */ -#include "priv.h" +#include "acr.h" +#include "gm200.h" #include <core/gpuobj.h> -#include <core/firmware.h> #include <subdev/fb.h> /** - * struct fw_bin_header - header of firmware files - * @bin_magic: always 0x3b1d14f0 - * @bin_ver: version of the bin format - * @bin_size: entire image size including this header - * @header_offset: offset of the firmware/bootloader header in the file - * @data_offset: offset of the firmware/bootloader payload in the file - * @data_size: size of the payload - * - * This header is located at the beginning of the HS firmware and HS bootloader - * files, to describe where the headers and data can be found. - */ -struct fw_bin_header { - u32 bin_magic; - u32 bin_ver; - u32 bin_size; - u32 header_offset; - u32 data_offset; - u32 data_size; -}; - -/** - * struct fw_bl_desc - firmware bootloader descriptor - * @start_tag: starting tag of bootloader - * @desc_dmem_load_off: DMEM offset of flcn_bl_dmem_desc - * @code_off: offset of code section - * @code_size: size of code section - * @data_off: offset of data section - * @data_size: size of data section - * - * This structure is embedded in bootloader firmware files at to describe the - * IMEM and DMEM layout expected by the bootloader. 
- */ -struct fw_bl_desc { - u32 start_tag; - u32 dmem_load_off; - u32 code_off; - u32 code_size; - u32 data_off; - u32 data_size; -}; - - -/** - * struct ls_ucode_mgr - manager for all LS falcon firmwares - * @count: number of managed LS falcons - * @wpr_size: size of the required WPR region in bytes - * @img_list: linked list of lsf_ucode_img - */ -struct ls_ucode_mgr { - u16 count; - u32 wpr_size; - struct list_head img_list; -}; - - -/* - * - * HS blob structures - * - */ - -/** - * struct hsf_fw_header - HS firmware descriptor - * @sig_dbg_offset: offset of the debug signature - * @sig_dbg_size: size of the debug signature - * @sig_prod_offset: offset of the production signature - * @sig_prod_size: size of the production signature - * @patch_loc: offset of the offset (sic) of where the signature is - * @patch_sig: offset of the offset (sic) to add to sig_*_offset - * @hdr_offset: offset of the load header (see struct hs_load_header) - * @hdr_size: size of above header - * - * This structure is embedded in the HS firmware image at - * hs_bin_hdr.header_offset. - */ -struct hsf_fw_header { - u32 sig_dbg_offset; - u32 sig_dbg_size; - u32 sig_prod_offset; - u32 sig_prod_size; - u32 patch_loc; - u32 patch_sig; - u32 hdr_offset; - u32 hdr_size; -}; - - -/** - * struct gm200_flcn_bl_desc - DMEM bootloader descriptor - * @signature: 16B signature for secure code. 0s if no secure code - * @ctx_dma: DMA context to be used by BL while loading code/data - * @code_dma_base: 256B-aligned Physical FB Address where code is located - * (falcon's $xcbase register) - * @non_sec_code_off: offset from code_dma_base where the non-secure code is - * located. The offset must be multiple of 256 to help perf - * @non_sec_code_size: the size of the nonSecure code part. - * @sec_code_off: offset from code_dma_base where the secure code is - * located. The offset must be multiple of 256 to help perf - * @sec_code_size: offset from code_dma_base where the secure code is - * located. The offset must be multiple of 256 to help perf - * @code_entry_point: code entry point which will be invoked by BL after - * code is loaded. - * @data_dma_base: 256B aligned Physical FB Address where data is located. - * (falcon's $xdbase register) - * @data_size: size of data block. Should be multiple of 256B - * - * Structure used by the bootloader to load the rest of the code. This has - * to be filled by host and copied into DMEM at offset provided in the - * hsflcn_bl_desc.bl_desc_dmem_load_off. - */ -struct gm200_flcn_bl_desc { - u32 reserved[4]; - u32 signature[4]; - u32 ctx_dma; - struct flcn_u64 code_dma_base; - u32 non_sec_code_off; - u32 non_sec_code_size; - u32 sec_code_off; - u32 sec_code_size; - u32 code_entry_point; - struct flcn_u64 data_dma_base; - u32 data_size; -}; - - -/** - * Convenience function to duplicate a firmware file in memory and check that - * it has the required minimum size. 
- */ -static void * -gm200_secboot_load_firmware(const struct nvkm_subdev *subdev, const char *name, - size_t min_size) -{ - const struct firmware *fw; - void *blob; - int ret; - - ret = nvkm_firmware_get(subdev->device, name, &fw); - if (ret) - return ERR_PTR(ret); - if (fw->size < min_size) { - nvkm_error(subdev, "%s is smaller than expected size %zu\n", - name, min_size); - nvkm_firmware_put(fw); - return ERR_PTR(-EINVAL); - } - blob = kmemdup(fw->data, fw->size, GFP_KERNEL); - nvkm_firmware_put(fw); - if (!blob) - return ERR_PTR(-ENOMEM); - - return blob; -} - - -/* - * Low-secure blob creation - */ - -#define BL_DESC_BLK_SIZE 256 -/** - * Build a ucode image and descriptor from provided bootloader, code and data. - * - * @bl: bootloader image, including 16-bytes descriptor - * @code: LS firmware code segment - * @data: LS firmware data segment - * @desc: ucode descriptor to be written - * - * Return: allocated ucode image with corresponding descriptor information. desc - * is also updated to contain the right offsets within returned image. - */ -static void * -ls_ucode_img_build(const struct firmware *bl, const struct firmware *code, - const struct firmware *data, struct ls_ucode_img_desc *desc) -{ - struct fw_bin_header *bin_hdr = (void *)bl->data; - struct fw_bl_desc *bl_desc = (void *)bl->data + bin_hdr->header_offset; - void *bl_data = (void *)bl->data + bin_hdr->data_offset; - u32 pos = 0; - void *image; - - desc->bootloader_start_offset = pos; - desc->bootloader_size = ALIGN(bl_desc->code_size, sizeof(u32)); - desc->bootloader_imem_offset = bl_desc->start_tag * 256; - desc->bootloader_entry_point = bl_desc->start_tag * 256; - - pos = ALIGN(pos + desc->bootloader_size, BL_DESC_BLK_SIZE); - desc->app_start_offset = pos; - desc->app_size = ALIGN(code->size, BL_DESC_BLK_SIZE) + - ALIGN(data->size, BL_DESC_BLK_SIZE); - desc->app_imem_offset = 0; - desc->app_imem_entry = 0; - desc->app_dmem_offset = 0; - desc->app_resident_code_offset = 0; - desc->app_resident_code_size = ALIGN(code->size, BL_DESC_BLK_SIZE); - - pos = ALIGN(pos + desc->app_resident_code_size, BL_DESC_BLK_SIZE); - desc->app_resident_data_offset = pos - desc->app_start_offset; - desc->app_resident_data_size = ALIGN(data->size, BL_DESC_BLK_SIZE); - - desc->image_size = ALIGN(bl_desc->code_size, BL_DESC_BLK_SIZE) + - desc->app_size; - - image = kzalloc(desc->image_size, GFP_KERNEL); - if (!image) - return ERR_PTR(-ENOMEM); - - memcpy(image + desc->bootloader_start_offset, bl_data, - bl_desc->code_size); - memcpy(image + desc->app_start_offset, code->data, code->size); - memcpy(image + desc->app_start_offset + desc->app_resident_data_offset, - data->data, data->size); - - return image; -} - -/** - * ls_ucode_img_load_generic() - load and prepare a LS ucode image - * - * Load the LS microcode, bootloader and signature and pack them into a single - * blob. Also generate the corresponding ucode descriptor. 
- */ -static int -ls_ucode_img_load_generic(const struct nvkm_subdev *subdev, - struct ls_ucode_img *img, const char *falcon_name, - const u32 falcon_id) -{ - const struct firmware *bl, *code, *data; - struct lsf_ucode_desc *lsf_desc; - char f[64]; - int ret; - - img->ucode_header = NULL; - - snprintf(f, sizeof(f), "gr/%s_bl", falcon_name); - ret = nvkm_firmware_get(subdev->device, f, &bl); - if (ret) - goto error; - - snprintf(f, sizeof(f), "gr/%s_inst", falcon_name); - ret = nvkm_firmware_get(subdev->device, f, &code); - if (ret) - goto free_bl; - - snprintf(f, sizeof(f), "gr/%s_data", falcon_name); - ret = nvkm_firmware_get(subdev->device, f, &data); - if (ret) - goto free_inst; - - img->ucode_data = ls_ucode_img_build(bl, code, data, - &img->ucode_desc); - if (IS_ERR(img->ucode_data)) { - ret = PTR_ERR(img->ucode_data); - goto free_data; - } - img->ucode_size = img->ucode_desc.image_size; - - snprintf(f, sizeof(f), "gr/%s_sig", falcon_name); - lsf_desc = gm200_secboot_load_firmware(subdev, f, sizeof(*lsf_desc)); - if (IS_ERR(lsf_desc)) { - ret = PTR_ERR(lsf_desc); - goto free_image; - } - /* not needed? the signature should already have the right value */ - lsf_desc->falcon_id = falcon_id; - memcpy(&img->lsb_header.signature, lsf_desc, sizeof(*lsf_desc)); - img->falcon_id = lsf_desc->falcon_id; - kfree(lsf_desc); - - /* success path - only free requested firmware files */ - goto free_data; - -free_image: - kfree(img->ucode_data); -free_data: - nvkm_firmware_put(data); -free_inst: - nvkm_firmware_put(code); -free_bl: - nvkm_firmware_put(bl); -error: - return ret; -} - -typedef int (*lsf_load_func)(const struct nvkm_subdev *, struct ls_ucode_img *); - -int -gm200_ls_load_fecs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) -{ - return ls_ucode_img_load_generic(subdev, img, "fecs", - NVKM_FALCON_FECS); -} - -int -gm200_ls_load_gpccs(const struct nvkm_subdev *subdev, struct ls_ucode_img *img) -{ - return ls_ucode_img_load_generic(subdev, img, "gpccs", - NVKM_FALCON_GPCCS); -} - -/** - * ls_ucode_img_load() - create a lsf_ucode_img and load it - */ -static struct ls_ucode_img * -ls_ucode_img_load(struct nvkm_subdev *subdev, lsf_load_func load_func) -{ - struct ls_ucode_img *img; - int ret; - - img = kzalloc(sizeof(*img), GFP_KERNEL); - if (!img) - return ERR_PTR(-ENOMEM); - - ret = load_func(subdev, img); - if (ret) { - kfree(img); - return ERR_PTR(ret); - } - - return img; -} - -/** - * gm200_secboot_ls_bl_desc() - populate a DMEM BL descriptor for LS image - * @img: ucode image to generate against - * @desc: descriptor to populate - * @sb: secure boot state to use for base addresses - * - * Populate the DMEM BL descriptor with the information contained in a - * ls_ucode_desc. 
- * - */ -static void -gm200_secboot_ls_bl_desc(const struct ls_ucode_img *img, u64 wpr_addr, - void *_desc) -{ - struct gm200_flcn_bl_desc *desc = _desc; - const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; - u64 addr_base; - - addr_base = wpr_addr + img->lsb_header.ucode_off + - pdesc->app_start_offset; - - memset(desc, 0, sizeof(*desc)); - desc->ctx_dma = FALCON_DMAIDX_UCODE; - desc->code_dma_base.lo = lower_32_bits( - (addr_base + pdesc->app_resident_code_offset)); - desc->code_dma_base.hi = upper_32_bits( - (addr_base + pdesc->app_resident_code_offset)); - desc->non_sec_code_size = pdesc->app_resident_code_size; - desc->data_dma_base.lo = lower_32_bits( - (addr_base + pdesc->app_resident_data_offset)); - desc->data_dma_base.hi = upper_32_bits( - (addr_base + pdesc->app_resident_data_offset)); - desc->data_size = pdesc->app_resident_data_size; - desc->code_entry_point = pdesc->app_imem_entry; -} - -#define LSF_LSB_HEADER_ALIGN 256 -#define LSF_BL_DATA_ALIGN 256 -#define LSF_BL_DATA_SIZE_ALIGN 256 -#define LSF_BL_CODE_SIZE_ALIGN 256 -#define LSF_UCODE_DATA_ALIGN 4096 - -/** - * ls_ucode_img_fill_headers - fill the WPR and LSB headers of an image - * @gsb: secure boot device used - * @img: image to generate for - * @offset: offset in the WPR region where this image starts - * - * Allocate space in the WPR area from offset and write the WPR and LSB headers - * accordingly. - * - * Return: offset at the end of this image. - */ -static u32 -ls_ucode_img_fill_headers(struct gm200_secboot *gsb, struct ls_ucode_img *img, - u32 offset) -{ - struct lsf_wpr_header *whdr = &img->wpr_header; - struct lsf_lsb_header *lhdr = &img->lsb_header; - struct ls_ucode_img_desc *desc = &img->ucode_desc; - const struct secboot_ls_single_func *func - (*gsb->ls_func)[img->falcon_id]; - - if (img->ucode_header) { - nvkm_fatal(&gsb->base.subdev, - "images withough loader are not supported yet!\n"); - return offset; - } - - /* Fill WPR header */ - whdr->falcon_id = img->falcon_id; - whdr->bootstrap_owner = gsb->base.func->boot_falcon; - whdr->status = LSF_IMAGE_STATUS_COPY; - - /* Align, save off, and include an LSB header size */ - offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN); - whdr->lsb_offset = offset; - offset += sizeof(struct lsf_lsb_header); - - /* - * Align, save off, and include the original (static) ucode - * image size - */ - offset = ALIGN(offset, LSF_UCODE_DATA_ALIGN); - lhdr->ucode_off = offset; - offset += img->ucode_size; - - /* - * For falcons that use a boot loader (BL), we append a loader - * desc structure on the end of the ucode image and consider - * this the boot loader data. The host will then copy the loader - * desc args to this space within the WPR region (before locking - * down) and the HS bin will then copy them to DMEM 0 for the - * loader. 
- */ - lhdr->bl_code_size = ALIGN(desc->bootloader_size, - LSF_BL_CODE_SIZE_ALIGN); - lhdr->ucode_size = ALIGN(desc->app_resident_data_offset, - LSF_BL_CODE_SIZE_ALIGN) + lhdr->bl_code_size; - lhdr->data_size = ALIGN(desc->app_size, LSF_BL_CODE_SIZE_ALIGN) + - lhdr->bl_code_size - lhdr->ucode_size; - /* - * Though the BL is located at 0th offset of the image, the VA - * is different to make sure that it doesn't collide the actual - * OS VA range - */ - lhdr->bl_imem_off = desc->bootloader_imem_offset; - lhdr->app_code_off = desc->app_start_offset + - desc->app_resident_code_offset; - lhdr->app_code_size = desc->app_resident_code_size; - lhdr->app_data_off = desc->app_start_offset + - desc->app_resident_data_offset; - lhdr->app_data_size = desc->app_resident_data_size; - - lhdr->flags = 0; - if (img->falcon_id == gsb->base.func->boot_falcon) - lhdr->flags = LSF_FLAG_DMACTL_REQ_CTX; - - /* GPCCS will be loaded using PRI */ - if (img->falcon_id == NVKM_FALCON_GPCCS) - lhdr->flags |= LSF_FLAG_FORCE_PRIV_LOAD; - - /* Align and save off BL descriptor size */ - lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN); - - /* - * Align, save off, and include the additional BL data - */ - offset = ALIGN(offset, LSF_BL_DATA_ALIGN); - lhdr->bl_data_off = offset; - offset += lhdr->bl_data_size; - - return offset; -} - -static void -ls_ucode_mgr_init(struct ls_ucode_mgr *mgr) -{ - memset(mgr, 0, sizeof(*mgr)); - INIT_LIST_HEAD(&mgr->img_list); -} - -static void -ls_ucode_mgr_cleanup(struct ls_ucode_mgr *mgr) -{ - struct ls_ucode_img *img, *t; - - list_for_each_entry_safe(img, t, &mgr->img_list, node) { - kfree(img->ucode_data); - kfree(img->ucode_header); - kfree(img); - } -} - -static void -ls_ucode_mgr_add_img(struct ls_ucode_mgr *mgr, struct ls_ucode_img *img) -{ - mgr->count++; - list_add_tail(&img->node, &mgr->img_list); -} - -/** - * ls_ucode_mgr_fill_headers - fill WPR and LSB headers of all managed images - */ -static void -ls_ucode_mgr_fill_headers(struct gm200_secboot *gsb, struct ls_ucode_mgr *mgr) -{ - struct ls_ucode_img *img; - u32 offset; - - /* - * Start with an array of WPR headers at the base of the WPR. - * The expectation here is that the secure falcon will do a single DMA - * read of this array and cache it internally so it's ok to pack these. - * Also, we add 1 to the falcon count to indicate the end of the array. - */ - offset = sizeof(struct lsf_wpr_header) * (mgr->count + 1); - - /* - * Walk the managed falcons, accounting for the LSB structs - * as well as the ucode images. 
- */ - list_for_each_entry(img, &mgr->img_list, node) { - offset = ls_ucode_img_fill_headers(gsb, img, offset); - } - - mgr->wpr_size = offset; -} - -/** - * ls_ucode_mgr_write_wpr - write the WPR blob contents - */ -static int -ls_ucode_mgr_write_wpr(struct gm200_secboot *gsb, struct ls_ucode_mgr *mgr, - struct nvkm_gpuobj *wpr_blob) -{ - struct ls_ucode_img *img; - u32 pos = 0; - - nvkm_kmap(wpr_blob); - - list_for_each_entry(img, &mgr->img_list, node) { - nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header, - sizeof(img->wpr_header)); - - nvkm_gpuobj_memcpy_to(wpr_blob, img->wpr_header.lsb_offset, - &img->lsb_header, sizeof(img->lsb_header)); - - /* Generate and write BL descriptor */ - if (!img->ucode_header) { - const struct secboot_ls_single_func *ls_func - (*gsb->ls_func)[img->falcon_id]; - u8 gdesc[ls_func->bl_desc_size]; - - ls_func->generate_bl_desc(img, gsb->acr_wpr_addr, - &gdesc); - - nvkm_gpuobj_memcpy_to(wpr_blob, - img->lsb_header.bl_data_off, - &gdesc, ls_func->bl_desc_size); - } - - /* Copy ucode */ - nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off, - img->ucode_data, img->ucode_size); - - pos += sizeof(img->wpr_header); - } - - nvkm_wo32(wpr_blob, pos, NVKM_FALCON_INVALID); - - nvkm_done(wpr_blob); - - return 0; -} - -/* Both size and address of WPR need to be 128K-aligned */ -#define WPR_ALIGNMENT 0x20000 -/** - * gm200_secboot_prepare_ls_blob() - prepare the LS blob - * - * For each securely managed falcon, load the FW, signatures and bootloaders and - * prepare a ucode blob. Then, compute the offsets in the WPR region for each - * blob, and finally write the headers and ucode blobs into a GPU object that - * will be copied into the WPR region by the HS firmware. - */ -static int -gm200_secboot_prepare_ls_blob(struct gm200_secboot *gsb) -{ - struct nvkm_secboot *sb = &gsb->base; - struct nvkm_device *device = sb->subdev.device; - struct ls_ucode_mgr mgr; - int falcon_id; - int ret; - - ls_ucode_mgr_init(&mgr); - - /* Load all LS blobs */ - for_each_set_bit(falcon_id, &sb->func->managed_falcons, - NVKM_FALCON_END) { - struct ls_ucode_img *img; - - img = ls_ucode_img_load(&sb->subdev, - (*gsb->ls_func)[falcon_id]->load); - - if (IS_ERR(img)) { - ret = PTR_ERR(img); - goto cleanup; - } - ls_ucode_mgr_add_img(&mgr, img); - } - - /* - * Fill the WPR and LSF headers with the right offsets and compute - * required WPR size - */ - ls_ucode_mgr_fill_headers(gsb, &mgr); - mgr.wpr_size = ALIGN(mgr.wpr_size, WPR_ALIGNMENT); - - /* Allocate GPU object that will contain the WPR region */ - ret = nvkm_gpuobj_new(device, mgr.wpr_size, WPR_ALIGNMENT, false, NULL, - &gsb->ls_blob); - if (ret) - goto cleanup; - - nvkm_debug(&sb->subdev, "%d managed LS falcons, WPR size is %d bytes\n", - mgr.count, mgr.wpr_size); - - /* If WPR address and size are not fixed, set them to fit the LS blob */ - if (!gsb->wpr_size) { - gsb->acr_wpr_addr = gsb->ls_blob->addr; - gsb->acr_wpr_size = gsb->ls_blob->size; - } else { - gsb->acr_wpr_addr = gsb->wpr_addr; - gsb->acr_wpr_size = gsb->wpr_size; - } - - /* Write LS blob */ - ret = ls_ucode_mgr_write_wpr(gsb, &mgr, gsb->ls_blob); - if (ret) - nvkm_gpuobj_del(&gsb->ls_blob); - -cleanup: - ls_ucode_mgr_cleanup(&mgr); - - return ret; -} - -static const secboot_ls_func -gm200_ls_func = { - [NVKM_FALCON_FECS] = &(struct secboot_ls_single_func) { - .load = gm200_ls_load_fecs, - .generate_bl_desc = gm200_secboot_ls_bl_desc, - .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), - }, - [NVKM_FALCON_GPCCS] = &(struct secboot_ls_single_func) { - .load = 
gm200_ls_load_gpccs, - .generate_bl_desc = gm200_secboot_ls_bl_desc, - .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), - }, -}; - -/* - * High-secure blob creation - */ - -/** - * gm200_secboot_hsf_patch_signature() - patch HS blob with correct signature - */ -static void -gm200_secboot_hsf_patch_signature(struct gm200_secboot *gsb, void *acr_image) -{ - struct nvkm_secboot *sb = &gsb->base; - struct fw_bin_header *hsbin_hdr = acr_image; - struct hsf_fw_header *fw_hdr = acr_image + hsbin_hdr->header_offset; - void *hs_data = acr_image + hsbin_hdr->data_offset; - void *sig; - u32 sig_size; - - /* Falcon in debug or production mode? */ - if ((nvkm_rd32(sb->subdev.device, sb->base + 0xc08) >> 20) & 0x1) { - sig = acr_image + fw_hdr->sig_dbg_offset; - sig_size = fw_hdr->sig_dbg_size; - } else { - sig = acr_image + fw_hdr->sig_prod_offset; - sig_size = fw_hdr->sig_prod_size; - } - - /* Patch signature */ - memcpy(hs_data + fw_hdr->patch_loc, sig + fw_hdr->patch_sig, sig_size); -} - -/** - * struct hsflcn_acr_desc - data section of the HS firmware - * - * This header is to be copied at the beginning of DMEM by the HS bootloader. - * - * @signature: signature of ACR ucode - * @wpr_region_id: region ID holding the WPR header and its details - * @wpr_offset: offset from the WPR region holding the wpr header - * @regions: region descriptors - * @nonwpr_ucode_blob_size: size of LS blob - * @nonwpr_ucode_blob_start: FB location of LS blob is - */ -struct hsflcn_acr_desc { - union { - u8 reserved_dmem[0x200]; - u32 signatures[4]; - } ucode_reserved_space; - u32 wpr_region_id; - u32 wpr_offset; - u32 mmu_mem_range; -#define FLCN_ACR_MAX_REGIONS 2 - struct { - u32 no_regions; - struct { - u32 start_addr; - u32 end_addr; - u32 region_id; - u32 read_mask; - u32 write_mask; - u32 client_mask; - } region_props[FLCN_ACR_MAX_REGIONS]; - } regions; - u32 ucode_blob_size; - u64 ucode_blob_base __aligned(8); - struct { - u32 vpr_enabled; - u32 vpr_start; - u32 vpr_end; - u32 hdcp_policies; - } vpr_desc; -}; - -static void -gm200_secboot_fixup_hs_desc(struct gm200_secboot *gsb, - struct hsflcn_acr_desc *desc) -{ - desc->ucode_blob_base = gsb->ls_blob->addr; - desc->ucode_blob_size = gsb->ls_blob->size; - - desc->wpr_offset = 0; - - /* WPR region information if WPR is not fixed */ - if (gsb->wpr_size == 0) { - desc->wpr_region_id = 1; - desc->regions.no_regions = 1; - desc->regions.region_props[0].region_id = 1; - desc->regions.region_props[0].start_addr - gsb->acr_wpr_addr >> 8; - desc->regions.region_props[0].end_addr - (gsb->acr_wpr_addr + gsb->acr_wpr_size) >> 8; - } -} - -/** - * gm200_secboot_prepare_hs_blob - load and prepare a HS blob and BL descriptor - * - * @gsb secure boot instance to prepare for - * @fw name of the HS firmware to load - * @blob pointer to gpuobj that will be allocated to receive the HS FW payload - * @bl_desc pointer to the BL descriptor to write for this firmware - * @patch whether we should patch the HS descriptor (only for HS loaders) - */ -static int -gm200_secboot_prepare_hs_blob(struct gm200_secboot *gsb, const char *fw, - struct nvkm_gpuobj **blob, - struct hsf_load_header *load_header, bool patch) -{ - struct nvkm_subdev *subdev = &gsb->base.subdev; - void *acr_image; - struct fw_bin_header *hsbin_hdr; - struct hsf_fw_header *fw_hdr; - struct hsf_load_header *load_hdr; - void *acr_data; - int ret; - - acr_image = gm200_secboot_load_firmware(subdev, fw, 0); - if (IS_ERR(acr_image)) - return PTR_ERR(acr_image); - - hsbin_hdr = acr_image; - fw_hdr = acr_image + 
hsbin_hdr->header_offset; - load_hdr = acr_image + fw_hdr->hdr_offset; - acr_data = acr_image + hsbin_hdr->data_offset; - - /* Patch signature */ - gm200_secboot_hsf_patch_signature(gsb, acr_image); - - /* Patch descriptor with WPR information? */ - if (patch) { - struct hsflcn_acr_desc *desc; - - desc = acr_data + load_hdr->data_dma_base; - gm200_secboot_fixup_hs_desc(gsb, desc); - } - - if (load_hdr->num_apps > GM200_ACR_MAX_APPS) { - nvkm_error(subdev, "more apps (%d) than supported (%d)!", - load_hdr->num_apps, GM200_ACR_MAX_APPS); - ret = -EINVAL; - goto cleanup; - } - memcpy(load_header, load_hdr, sizeof(*load_header) + - (sizeof(load_hdr->app[0]) * load_hdr->num_apps)); - - /* Create ACR blob and copy HS data to it */ - ret = nvkm_gpuobj_new(subdev->device, ALIGN(hsbin_hdr->data_size, 256), - 0x1000, false, NULL, blob); - if (ret) - goto cleanup; - - nvkm_kmap(*blob); - nvkm_gpuobj_memcpy_to(*blob, 0, acr_data, hsbin_hdr->data_size); - nvkm_done(*blob); - -cleanup: - kfree(acr_image); - - return ret; -} - -/* - * High-secure bootloader blob creation - */ - -static int -gm200_secboot_prepare_hsbl_blob(struct gm200_secboot *gsb) -{ - struct nvkm_subdev *subdev = &gsb->base.subdev; - - gsb->hsbl_blob = gm200_secboot_load_firmware(subdev, "acr/bl", 0); - if (IS_ERR(gsb->hsbl_blob)) { - int ret = PTR_ERR(gsb->hsbl_blob); - - gsb->hsbl_blob = NULL; - return ret; - } - - return 0; -} - -/** - * gm20x_secboot_prepare_blobs - load blobs common to all GM20X GPUs. - * - * This includes the LS blob, HS ucode loading blob, and HS bootloader. - * - * The HS ucode unload blob is only used on dGPU. - */ -int -gm20x_secboot_prepare_blobs(struct gm200_secboot *gsb) -{ - int ret; - - /* Load and prepare the managed falcon's firmwares */ - if (!gsb->ls_blob) { - ret = gm200_secboot_prepare_ls_blob(gsb); - if (ret) - return ret; - } - - /* Load the HS firmware that will load the LS firmwares */ - if (!gsb->acr_load_blob) { - ret = gm200_secboot_prepare_hs_blob(gsb, "acr/ucode_load", - &gsb->acr_load_blob, - &gsb->load_bl_header, true); - if (ret) - return ret; - } - - /* Load the HS firmware bootloader */ - if (!gsb->hsbl_blob) { - ret = gm200_secboot_prepare_hsbl_blob(gsb); - if (ret) - return ret; - } - - return 0; -} - -static int -gm200_secboot_prepare_blobs(struct gm200_secboot *gsb) -{ - int ret; - - ret = gm20x_secboot_prepare_blobs(gsb); - if (ret) - return ret; - - /* dGPU only: load the HS firmware that unprotects the WPR region */ - if (!gsb->acr_unload_blob) { - ret = gm200_secboot_prepare_hs_blob(gsb, "acr/ucode_unload", - &gsb->acr_unload_blob, - &gsb->unload_bl_header, false); - if (ret) - return ret; - } - - return 0; -} - -static int -gm200_secboot_blobs_ready(struct gm200_secboot *gsb) -{ - struct nvkm_subdev *subdev = &gsb->base.subdev; - int ret; - - /* firmware already loaded, nothing to do... 
*/ - if (gsb->firmware_ok) - return 0; - - ret = gsb->func->prepare_blobs(gsb); - if (ret) { - nvkm_error(subdev, "failed to load secure firmware\n"); - return ret; - } - - gsb->firmware_ok = true; - - return 0; -} - - -/* - * Secure Boot Execution - */ - -/** - * gm200_secboot_load_hs_bl() - load HS bootloader into DMEM and IMEM - */ -static void -gm200_secboot_load_hs_bl(struct gm200_secboot *gsb, void *data, u32 data_size) -{ - struct nvkm_device *device = gsb->base.subdev.device; - struct fw_bin_header *hdr = gsb->hsbl_blob; - struct fw_bl_desc *hsbl_desc = gsb->hsbl_blob + hdr->header_offset; - void *blob_data = gsb->hsbl_blob + hdr->data_offset; - void *hsbl_code = blob_data + hsbl_desc->code_off; - void *hsbl_data = blob_data + hsbl_desc->data_off; - u32 code_size = ALIGN(hsbl_desc->code_size, 256); - const u32 base = gsb->base.base; - u32 code_start; - - /* - * Copy HS bootloader data - */ - nvkm_falcon_load_dmem(device, gsb->base.base, hsbl_data, 0x00000, - hsbl_desc->data_size); - - /* - * Copy HS bootloader interface structure where the HS descriptor - * expects it to be - */ - nvkm_falcon_load_dmem(device, gsb->base.base, data, - hsbl_desc->dmem_load_off, data_size); - - /* Copy HS bootloader code to end of IMEM */ - code_start = (nvkm_rd32(device, base + 0x108) & 0x1ff) << 8; - code_start -= code_size; - nvkm_falcon_load_imem(device, gsb->base.base, hsbl_code, code_start, - code_size, hsbl_desc->start_tag); -} - -/** * gm200_secboot_setup_falcon() - set up the secure falcon for secure boot */ static int -gm200_secboot_setup_falcon(struct gm200_secboot *gsb) +gm200_secboot_setup_falcon(struct gm200_secboot *gsb, struct nvkm_acr *acr) { struct nvkm_device *device = gsb->base.subdev.device; - struct fw_bin_header *hdr = gsb->hsbl_blob; - struct fw_bl_desc *hsbl_desc = gsb->hsbl_blob + hdr->header_offset; - /* virtual start address for boot vector */ - u32 virt_addr = hsbl_desc->start_tag << 8; const u32 base = gsb->base.base; const u32 reg_base = base + 0xe00; u32 inst_loc; @@ -1075,135 +65,52 @@ gm200_secboot_setup_falcon(struct gm200_secboot *gsb) (inst_loc << 28) | (1 << 30)); /* Set boot vector to code's starting virtual address */ - nvkm_wr32(device, base + 0x104, virt_addr); + nvkm_wr32(device, base + 0x104, acr->start_address); + + /* Clear mailbox register used to reflect capabilities */ + nvkm_wr32(device, base + 0x044, 0x0); return 0; } /** - * gm200_secboot_run_hs_blob() - run the given high-secure blob + * gm200_secboot_run_blob() - run the given high-secure blob + * */ -static int -gm200_secboot_run_hs_blob(struct gm200_secboot *gsb, struct nvkm_gpuobj *blob) +int +gm200_secboot_run_blob(struct nvkm_secboot *sb, struct nvkm_gpuobj *blob) { + struct gm200_secboot *gsb = gm200_secboot(sb); struct nvkm_vma vma; - const u32 bl_desc_size = gsb->func->bl_desc_size; - const struct hsf_load_header *load_hdr; - u8 bl_desc[bl_desc_size]; int ret; - /* Find the bootloader descriptor for our blob and copy it */ - if (blob == gsb->acr_load_blob) { - load_hdr = &gsb->load_bl_header; - - } else if (blob == gsb->acr_unload_blob) { - load_hdr = &gsb->unload_bl_header; - } else { - nvkm_error(&gsb->base.subdev, "invalid secure boot blob!\n"); - return -EINVAL; - } - /* Map the HS firmware so the HS bootloader can see it */ ret = nvkm_gpuobj_map(blob, gsb->vm, NV_MEM_ACCESS_RW, &vma); if (ret) return ret; - /* Generate the BL header */ - gsb->func->generate_bl_desc(load_hdr, bl_desc, vma.offset); - /* Reset the falcon and make it ready to run the HS bootloader */ - ret = 
gm200_secboot_setup_falcon(gsb); + ret = gm200_secboot_setup_falcon(gsb, sb->acr); if (ret) - goto end; + goto done; /* Load the HS bootloader into the falcon's IMEM/DMEM */ - gm200_secboot_load_hs_bl(gsb, &bl_desc, bl_desc_size); + ret = sb->acr->func->load(sb->acr, &gsb->base, blob, vma.offset); + if (ret) + goto done; /* Start the HS bootloader */ - ret = nvkm_secboot_falcon_run(&gsb->base); + ret = nvkm_secboot_falcon_run(sb); if (ret) - goto end; + goto done; -end: +done: /* We don't need the ACR firmware anymore */ nvkm_gpuobj_unmap(&vma); return ret; } -/* - * gm200_secboot_reset() - execute secure boot from the prepared state - * - * Load the HS bootloader and ask the falcon to run it. This will in turn - * load the HS firmware and run it, so once the falcon stops all the managed - * falcons should have their LS firmware loaded and be ready to run. - */ -int -gm200_secboot_reset(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) -{ - struct gm200_secboot *gsb = gm200_secboot(sb); - int ret; - - /* Make sure all blobs are ready */ - ret = gm200_secboot_blobs_ready(gsb); - if (ret) - return ret; - - /* - * Dummy GM200 implementation: perform secure boot each time we are - * called on FECS. Since only FECS and GPCCS are managed and started - * together, this ought to be safe. - * - * Once we have proper PMU firmware and support, this will be changed - * to a proper call to the PMU method. - */ - if (falcon != NVKM_FALCON_FECS) - goto end; - - /* If WPR is set and we have an unload blob, run it to unlock WPR */ - if (gsb->acr_unload_blob && - gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) { - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob); - if (ret) - return ret; - } - - /* Reload all managed falcons */ - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_load_blob); - if (ret) - return ret; - -end: - gsb->falcon_state[falcon] = RESET; - return 0; -} - -int -gm200_secboot_start(struct nvkm_secboot *sb, enum nvkm_falconidx falcon) -{ - struct gm200_secboot *gsb = gm200_secboot(sb); - int base; - - switch (falcon) { - case NVKM_FALCON_FECS: - base = 0x409000; - break; - case NVKM_FALCON_GPCCS: - base = 0x41a000; - break; - default: - nvkm_error(&sb->subdev, "cannot start unhandled falcon!\n"); - return -EINVAL; - } - - nvkm_wr32(sb->subdev.device, base + 0x130, 0x00000002); - gsb->falcon_state[falcon] = RUNNING; - - return 0; -} - - - int gm200_secboot_oneinit(struct nvkm_secboot *sb) { @@ -1240,23 +147,22 @@ gm200_secboot_oneinit(struct nvkm_secboot *sb) nvkm_wo32(gsb->inst, 0x20c, upper_32_bits(vm_area_len - 1)); nvkm_done(gsb->inst); + if (sb->acr->func->oneinit) { + ret = sb->acr->func->oneinit(sb->acr, sb); + if (ret) + return ret; + } + return 0; } -static int +int gm200_secboot_fini(struct nvkm_secboot *sb, bool suspend) { - struct gm200_secboot *gsb = gm200_secboot(sb); int ret = 0; - int i; - /* Run the unload blob to unprotect the WPR region */ - if (gsb->acr_unload_blob && - gsb->falcon_state[NVKM_FALCON_FECS] != NON_SECURE) - ret = gm200_secboot_run_hs_blob(gsb, gsb->acr_unload_blob); - - for (i = 0; i < NVKM_FALCON_END; i++) - gsb->falcon_state[i] = NON_SECURE; + if (sb->acr->func->fini) + ret = sb->acr->func->fini(sb->acr, sb, suspend); return ret; } @@ -1266,11 +172,7 @@ gm200_secboot_dtor(struct nvkm_secboot *sb) { struct gm200_secboot *gsb = gm200_secboot(sb); - nvkm_gpuobj_del(&gsb->acr_unload_blob); - - kfree(gsb->hsbl_blob); - nvkm_gpuobj_del(&gsb->acr_load_blob); - nvkm_gpuobj_del(&gsb->ls_blob); + sb->acr->func->dtor(sb->acr); nvkm_vm_ref(NULL, 
&gsb->vm, gsb->pgd); nvkm_gpuobj_del(&gsb->pgd); @@ -1285,37 +187,7 @@ gm200_secboot = { .dtor = gm200_secboot_dtor, .oneinit = gm200_secboot_oneinit, .fini = gm200_secboot_fini, - .reset = gm200_secboot_reset, - .start = gm200_secboot_start, - .managed_falcons = BIT(NVKM_FALCON_FECS) | - BIT(NVKM_FALCON_GPCCS), - .boot_falcon = NVKM_FALCON_PMU, -}; - -static void -gm200_secboot_generate_bl_desc(const struct hsf_load_header *hdr, - void *_bl_desc, u64 offset) -{ - struct gm200_flcn_bl_desc *bl_desc = _bl_desc; - - memset(bl_desc, 0, sizeof(*bl_desc)); - bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; - bl_desc->non_sec_code_off = hdr->non_sec_code_off; - bl_desc->non_sec_code_size = hdr->non_sec_code_size; - bl_desc->sec_code_off = hdr->app[0].sec_code_off; - bl_desc->sec_code_size = hdr->app[0].sec_code_size; - bl_desc->code_entry_point = 0; - - bl_desc->code_dma_base = u64_to_flcn64(offset); - bl_desc->data_dma_base = u64_to_flcn64(offset + hdr->data_dma_base); - bl_desc->data_size = hdr->data_size; -} - -static const struct gm200_secboot_func -gm200_secboot_func = { - .bl_desc_size = sizeof(struct gm200_flcn_bl_desc), - .generate_bl_desc = gm200_secboot_generate_bl_desc, - .prepare_blobs = gm200_secboot_prepare_blobs, + .run_blob = gm200_secboot_run_blob, }; int @@ -1324,6 +196,11 @@ gm200_secboot_new(struct nvkm_device *device, int index, { int ret; struct gm200_secboot *gsb; + struct nvkm_acr *acr; + + acr = acr_r361_new(BIT(NVKM_FALCON_FECS) | BIT(NVKM_FALCON_GPCCS)); + if (IS_ERR(acr)) + return PTR_ERR(acr); gsb = kzalloc(sizeof(*gsb), GFP_KERNEL); if (!gsb) { @@ -1332,16 +209,14 @@ gm200_secboot_new(struct nvkm_device *device, int index, } *psb = &gsb->base; - ret = nvkm_secboot_ctor(&gm200_secboot, device, index, &gsb->base); + ret = nvkm_secboot_ctor(&gm200_secboot, acr, device, index, &gsb->base); if (ret) return ret; - gsb->func = &gm200_secboot_func; - gsb->ls_func = &gm200_ls_func; - return 0; } + MODULE_FIRMWARE("nvidia/gm200/acr/bl.bin"); MODULE_FIRMWARE("nvidia/gm200/acr/ucode_load.bin"); MODULE_FIRMWARE("nvidia/gm200/acr/ucode_unload.bin"); diff --git a/drm/nouveau/nvkm/subdev/secboot/gm200.h b/drm/nouveau/nvkm/subdev/secboot/gm200.h new file mode 100644 index 000000000000..45adf1a3bc20 --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/gm200.h @@ -0,0 +1,43 @@ +/* + * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. 
+ */ + +#ifndef __NVKM_SECBOOT_GM200_H__ +#define __NVKM_SECBOOT_GM200_H__ + +#include "priv.h" + +struct gm200_secboot { + struct nvkm_secboot base; + + /* Instance block & address space used for HS FW execution */ + struct nvkm_gpuobj *inst; + struct nvkm_gpuobj *pgd; + struct nvkm_vm *vm; +}; +#define gm200_secboot(sb) container_of(sb, struct gm200_secboot, base) + +int gm200_secboot_oneinit(struct nvkm_secboot *); +int gm200_secboot_fini(struct nvkm_secboot *, bool); +void *gm200_secboot_dtor(struct nvkm_secboot *); +int gm200_secboot_run_blob(struct nvkm_secboot *, struct nvkm_gpuobj *); + +#endif diff --git a/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drm/nouveau/nvkm/subdev/secboot/gm20b.c index 403b4d690902..6bd3aff1ffb1 100644 --- a/drm/nouveau/nvkm/subdev/secboot/gm20b.c +++ b/drm/nouveau/nvkm/subdev/secboot/gm20b.c @@ -20,98 +20,8 @@ * DEALINGS IN THE SOFTWARE. */ -#include "priv.h" - -#include <core/gpuobj.h> - -/* - * The BL header format used by GM20B's firmware is slightly different - * from the one of GM200. Fix the differences here. - */ -struct gm20b_flcn_bl_desc { - u32 reserved[4]; - u32 signature[4]; - u32 ctx_dma; - u32 code_dma_base; - u32 non_sec_code_off; - u32 non_sec_code_size; - u32 sec_code_off; - u32 sec_code_size; - u32 code_entry_point; - u32 data_dma_base; - u32 data_size; -}; - -static void -gm20b_secboot_ls_bl_desc(const struct ls_ucode_img *img, u64 wpr_addr, - void *_desc) -{ - struct gm20b_flcn_bl_desc *desc = _desc; - const struct ls_ucode_img_desc *pdesc = &img->ucode_desc; - u64 base; - - base = wpr_addr + img->lsb_header.ucode_off + pdesc->app_start_offset; - - memset(desc, 0, sizeof(*desc)); - desc->ctx_dma = FALCON_DMAIDX_UCODE; - desc->code_dma_base = (base + pdesc->app_resident_code_offset) >> 8; - desc->non_sec_code_size = pdesc->app_resident_code_size; - desc->data_dma_base = (base + pdesc->app_resident_data_offset) >> 8; - desc->data_size = pdesc->app_resident_data_size; - desc->code_entry_point = pdesc->app_imem_entry; -} - -static int -gm20b_secboot_prepare_blobs(struct gm200_secboot *gsb) -{ - struct nvkm_subdev *subdev = &gsb->base.subdev; - int acr_size; - int ret; - - ret = gm20x_secboot_prepare_blobs(gsb); - if (ret) - return ret; - - acr_size = gsb->acr_load_blob->size; - /* - * On Tegra the WPR region is set by the bootloader. It is illegal for - * the HS blob to be larger than this region. 
- */ - if (acr_size > gsb->wpr_size) { - nvkm_error(subdev, "WPR region too small for FW blob!\n"); - nvkm_error(subdev, "required: %dB\n", acr_size); - nvkm_error(subdev, "WPR size: %dB\n", gsb->wpr_size); - return -ENOSPC; - } - - return 0; -} - -static void -gm20b_secboot_generate_bl_desc(const struct hsf_load_header *load_hdr, - void *_bl_desc, u64 offset) -{ - struct gm20b_flcn_bl_desc *bl_desc = _bl_desc; - - memset(bl_desc, 0, sizeof(*bl_desc)); - bl_desc->ctx_dma = FALCON_DMAIDX_VIRT; - bl_desc->non_sec_code_off = load_hdr->non_sec_code_off; - bl_desc->non_sec_code_size = load_hdr->non_sec_code_size; - bl_desc->sec_code_off = load_hdr->app[0].sec_code_off; - bl_desc->sec_code_size = load_hdr->app[0].sec_code_size; - bl_desc->code_entry_point = 0; - bl_desc->code_dma_base = offset >> 8; - bl_desc->data_dma_base = (offset + load_hdr->data_dma_base) >> 8; - bl_desc->data_size = load_hdr->data_size; -} - -static const struct gm200_secboot_func -gm20b_secboot_func = { - .bl_desc_size = sizeof(struct gm20b_flcn_bl_desc), - .generate_bl_desc = gm20b_secboot_generate_bl_desc, - .prepare_blobs = gm20b_secboot_prepare_blobs, -}; - +#include "acr.h" +#include "gm200.h" #ifdef CONFIG_ARCH_TEGRA #define TEGRA_MC_BASE 0x70019000 @@ -139,15 +49,15 @@ gm20b_tegra_read_wpr(struct gm200_secboot *gsb) nvkm_error(&sb->subdev, "Cannot map Tegra MC registers\n"); return PTR_ERR(mc); } - gsb->wpr_addr = ioread32_native(mc + MC_SECURITY_CARVEOUT2_BOM_0) | + sb->wpr_addr = ioread32_native(mc + MC_SECURITY_CARVEOUT2_BOM_0) | ((u64)ioread32_native(mc + MC_SECURITY_CARVEOUT2_BOM_HI_0) << 32); - gsb->wpr_size = ioread32_native(mc + MC_SECURITY_CARVEOUT2_SIZE_128K) + sb->wpr_size = ioread32_native(mc + MC_SECURITY_CARVEOUT2_SIZE_128K) << 17; cfg = ioread32_native(mc + MC_SECURITY_CARVEOUT2_CFG0); iounmap(mc); /* Check that WPR settings are valid */ - if (gsb->wpr_size == 0) { + if (sb->wpr_size == 0) { nvkm_error(&sb->subdev, "WPR region is empty\n"); return -EINVAL; } @@ -185,19 +95,8 @@ static const struct nvkm_secboot_func gm20b_secboot = { .dtor = gm200_secboot_dtor, .oneinit = gm20b_secboot_oneinit, - .reset = gm200_secboot_reset, - .start = gm200_secboot_start, - .managed_falcons = BIT(NVKM_FALCON_FECS), - .boot_falcon = NVKM_FALCON_PMU, -}; - -static const secboot_ls_func -gm20b_ls_func = { - [NVKM_FALCON_FECS] = &(struct secboot_ls_single_func) { - .load = gm200_ls_load_fecs, - .generate_bl_desc = gm20b_secboot_ls_bl_desc, - .bl_desc_size = sizeof(struct gm20b_flcn_bl_desc), - }, + .fini = gm200_secboot_fini, + .run_blob = gm200_secboot_run_blob, }; int @@ -206,6 +105,11 @@ gm20b_secboot_new(struct nvkm_device *device, int index, { int ret; struct gm200_secboot *gsb; + struct nvkm_acr *acr; + + acr = acr_r352_new(BIT(NVKM_FALCON_FECS)); + if (IS_ERR(acr)) + return PTR_ERR(acr); gsb = kzalloc(sizeof(*gsb), GFP_KERNEL); if (!gsb) { @@ -214,13 +118,10 @@ gm20b_secboot_new(struct nvkm_device *device, int index, } *psb = &gsb->base; - ret = nvkm_secboot_ctor(&gm20b_secboot, device, index, &gsb->base); + ret = nvkm_secboot_ctor(&gm20b_secboot, acr, device, index, &gsb->base); if (ret) return ret; - gsb->func = &gm20b_secboot_func; - gsb->ls_func = &gm20b_ls_func; - return 0; } diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h new file mode 100644 index 000000000000..0518371a287c --- /dev/null +++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h @@ -0,0 +1,245 @@ +/* + * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved. 
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __NVKM_SECBOOT_LS_UCODE_H__
+#define __NVKM_SECBOOT_LS_UCODE_H__
+
+#include <core/os.h>
+#include <core/falcon.h>
+#include <core/subdev.h>
+
+/*
+ *
+ * LS blob structures
+ *
+ */
+
+/**
+ * struct lsf_ucode_desc - LS falcon signatures
+ * @prd_keys: signature to use when the GPU is in production mode
+ * @dbg_keys: signature to use when the GPU is in debug mode
+ * @b_prd_present: whether the production key is present
+ * @b_dbg_present: whether the debug key is present
+ * @falcon_id: ID of the falcon the ucode applies to
+ *
+ * Directly loaded from a signature file.
+ */
+struct lsf_ucode_desc {
+	u8 prd_keys[2][16];
+	u8 dbg_keys[2][16];
+	u32 b_prd_present;
+	u32 b_dbg_present;
+	u32 falcon_id;
+};
+
+/**
+ * struct lsf_lsb_header - LS firmware header
+ * @signature: signature to verify the firmware against
+ * @ucode_off: offset of the ucode blob in the WPR region. The ucode
+ * blob contains the bootloader, code and data of the
+ * LS falcon
+ * @ucode_size: size of the ucode blob, including bootloader
+ * @data_size: size of the ucode blob data
+ * @bl_code_size: size of the bootloader code
+ * @bl_imem_off: offset in IMEM of the bootloader
+ * @bl_data_off: offset of the bootloader data in WPR region
+ * @bl_data_size: size of the bootloader data
+ * @app_code_off: offset of the app code relative to ucode_off
+ * @app_code_size: size of the app code
+ * @app_data_off: offset of the app data relative to ucode_off
+ * @app_data_size: size of the app data
+ * @flags: flags for the secure bootloader
+ *
+ * This structure is written into the WPR region for each managed falcon. Each
+ * instance is referenced by the lsb_offset member of the corresponding
+ * lsf_wpr_header.
+ */
+struct lsf_lsb_header {
+	struct lsf_ucode_desc signature;
+	u32 ucode_off;
+	u32 ucode_size;
+	u32 data_size;
+	u32 bl_code_size;
+	u32 bl_imem_off;
+	u32 bl_data_off;
+	u32 bl_data_size;
+	u32 app_code_off;
+	u32 app_code_size;
+	u32 app_data_off;
+	u32 app_data_size;
+	u32 flags;
+#define LSF_FLAG_LOAD_CODE_AT_0 1
+#define LSF_FLAG_DMACTL_REQ_CTX 4
+#define LSF_FLAG_FORCE_PRIV_LOAD 8
+};
+
+/**
+ * struct lsf_wpr_header - LS blob WPR Header
+ * @falcon_id: LS falcon ID
+ * @lsb_offset: offset of the lsb_lsf_header in the WPR region
+ * @bootstrap_owner: secure falcon responsible for bootstrapping the LS falcon
+ * @lazy_bootstrap: skip bootstrapping by ACR
+ * @status: bootstrapping status
+ *
+ * An array of these is written at the beginning of the WPR region, one for
+ * each managed falcon. The array is terminated by an instance which falcon_id
+ * is LSF_FALCON_ID_INVALID.
+ */
+struct lsf_wpr_header {
+	u32 falcon_id;
+	u32 lsb_offset;
+	u32 bootstrap_owner;
+	u32 lazy_bootstrap;
+	u32 status;
+#define LSF_IMAGE_STATUS_NONE 0
+#define LSF_IMAGE_STATUS_COPY 1
+#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2
+#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3
+#define LSF_IMAGE_STATUS_VALIDATION_DONE 4
+#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5
+#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6
+};
+
+
+/**
+ * struct ls_ucode_img_desc - descriptor of firmware image
+ * @descriptor_size: size of this descriptor
+ * @image_size: size of the whole image
+ * @bootloader_start_offset: start offset of the bootloader in ucode image
+ * @bootloader_size: size of the bootloader
+ * @bootloader_imem_offset: start offset of the bootloader in IMEM
+ * @bootloader_entry_point: entry point of the bootloader in IMEM
+ * @app_start_offset: start offset of the LS firmware
+ * @app_size: size of the LS firmware's code and data
+ * @app_imem_offset: offset of the app in IMEM
+ * @app_imem_entry: entry point of the app in IMEM
+ * @app_dmem_offset: offset of the data in DMEM
+ * @app_resident_code_offset: offset of app code from app_start_offset
+ * @app_resident_code_size: size of the code
+ * @app_resident_data_offset: offset of data from app_start_offset
+ * @app_resident_data_size: size of data
+ *
+ * A firmware image contains the code, data, and bootloader of a given LS
+ * falcon in a single blob. This structure describes where everything is.
+ *
+ * This can be generated from a (bootloader, code, data) set if they have
+ * been loaded separately, or come directly from a file.
+ */
+struct ls_ucode_img_desc {
+	u32 descriptor_size;
+	u32 image_size;
+	u32 tools_version;
+	u32 app_version;
+	char date[64];
+	u32 bootloader_start_offset;
+	u32 bootloader_size;
+	u32 bootloader_imem_offset;
+	u32 bootloader_entry_point;
+	u32 app_start_offset;
+	u32 app_size;
+	u32 app_imem_offset;
+	u32 app_imem_entry;
+	u32 app_dmem_offset;
+	u32 app_resident_code_offset;
+	u32 app_resident_code_size;
+	u32 app_resident_data_offset;
+	u32 app_resident_data_size;
+	u32 nb_overlays;
+	struct {u32 start; u32 size; } load_ovl[64];
+	u32 compressed;
+};
+
+/**
+ * struct ls_ucode_img - temporary storage for loaded LS firmwares
+ * @node: to link within lsf_ucode_mgr
+ * @falcon_id: ID of the falcon this LS firmware is for
+ * @ucode_desc: loaded or generated map of ucode_data
+ * @ucode_header: header of the firmware
+ * @ucode_data: firmware payload (code and data)
+ * @ucode_size: size in bytes of data in ucode_data
+ * @wpr_header: WPR header to be written to the LS blob
+ * @lsb_header: LSB header to be written to the LS blob
+ *
+ * Preparing the WPR LS blob requires information about all the LS firmwares
+ * (size, etc) to be known. This structure contains all the data of one LS
+ * firmware.
+ */
+struct ls_ucode_img {
+	struct list_head node;
+	enum nvkm_falconidx falcon_id;
+
+	struct ls_ucode_img_desc ucode_desc;
+	u32 *ucode_header;
+	u8 *ucode_data;
+	u32 ucode_size;
+
+	struct lsf_wpr_header wpr_header;
+	struct lsf_lsb_header lsb_header;
+};
+
+/**
+ * struct fw_bin_header - header of firmware files
+ * @bin_magic: always 0x3b1d14f0
+ * @bin_ver: version of the bin format
+ * @bin_size: entire image size including this header
+ * @header_offset: offset of the firmware/bootloader header in the file
+ * @data_offset: offset of the firmware/bootloader payload in the file
+ * @data_size: size of the payload
+ *
+ * This header is located at the beginning of the HS firmware and HS bootloader
+ * files, to describe where the headers and data can be found.
+ */
+struct fw_bin_header {
+	u32 bin_magic;
+	u32 bin_ver;
+	u32 bin_size;
+	u32 header_offset;
+	u32 data_offset;
+	u32 data_size;
+};
+
+/**
+ * struct fw_bl_desc - firmware bootloader descriptor
+ * @start_tag: starting tag of bootloader
+ * @dmem_load_off: DMEM offset of flcn_bl_dmem_desc
+ * @code_off: offset of code section
+ * @code_size: size of code section
+ * @data_off: offset of data section
+ * @data_size: size of data section
+ *
+ * This structure is embedded in bootloader firmware files to describe the
+ * IMEM and DMEM layout expected by the bootloader.
+ */
+struct fw_bl_desc {
+	u32 start_tag;
+	u32 dmem_load_off;
+	u32 code_off;
+	u32 code_size;
+	u32 data_off;
+	u32 data_size;
+};
+
+int acr_ls_ucode_load_fecs(const struct nvkm_subdev *, struct ls_ucode_img *);
+int acr_ls_ucode_load_gpccs(const struct nvkm_subdev *, struct ls_ucode_img *);
+
+
+#endif
diff --git a/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c
new file mode 100644
index 000000000000..09f5f1f1a50d
--- /dev/null
+++ b/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c
@@ -0,0 +1,165 @@
+/*
+ * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + */ + + +#include "ls_ucode.h" +#include "acr.h" + +#include <core/firmware.h> + +#define BL_DESC_BLK_SIZE 256 +/** + * Build a ucode image and descriptor from provided bootloader, code and data. + * + * @bl: bootloader image, including 16-bytes descriptor + * @code: LS firmware code segment + * @data: LS firmware data segment + * @desc: ucode descriptor to be written + * + * Return: allocated ucode image with corresponding descriptor information. desc + * is also updated to contain the right offsets within returned image. + */ +static void * +ls_ucode_img_build(const struct firmware *bl, const struct firmware *code, + const struct firmware *data, struct ls_ucode_img_desc *desc) +{ + struct fw_bin_header *bin_hdr = (void *)bl->data; + struct fw_bl_desc *bl_desc = (void *)bl->data + bin_hdr->header_offset; + void *bl_data = (void *)bl->data + bin_hdr->data_offset; + u32 pos = 0; + void *image; + + desc->bootloader_start_offset = pos; + desc->bootloader_size = ALIGN(bl_desc->code_size, sizeof(u32)); + desc->bootloader_imem_offset = bl_desc->start_tag * 256; + desc->bootloader_entry_point = bl_desc->start_tag * 256; + + pos = ALIGN(pos + desc->bootloader_size, BL_DESC_BLK_SIZE); + desc->app_start_offset = pos; + desc->app_size = ALIGN(code->size, BL_DESC_BLK_SIZE) + + ALIGN(data->size, BL_DESC_BLK_SIZE); + desc->app_imem_offset = 0; + desc->app_imem_entry = 0; + desc->app_dmem_offset = 0; + desc->app_resident_code_offset = 0; + desc->app_resident_code_size = ALIGN(code->size, BL_DESC_BLK_SIZE); + + pos = ALIGN(pos + desc->app_resident_code_size, BL_DESC_BLK_SIZE); + desc->app_resident_data_offset = pos - desc->app_start_offset; + desc->app_resident_data_size = ALIGN(data->size, BL_DESC_BLK_SIZE); + + desc->image_size = ALIGN(bl_desc->code_size, BL_DESC_BLK_SIZE) + + desc->app_size; + + image = kzalloc(desc->image_size, GFP_KERNEL); + if (!image) + return ERR_PTR(-ENOMEM); + + memcpy(image + desc->bootloader_start_offset, bl_data, + bl_desc->code_size); + memcpy(image + desc->app_start_offset, code->data, code->size); + memcpy(image + desc->app_start_offset + desc->app_resident_data_offset, + data->data, data->size); + + return image; +} + +/** + * ls_ucode_img_load_gr() - load and prepare a LS GR ucode image + * + * Load the LS microcode, bootloader and signature and pack them into a single + * blob. Also generate the corresponding ucode descriptor. 
+ */ +static int +ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, struct ls_ucode_img *img, + const char *falcon_name, const u32 falcon_id) +{ + const struct firmware *bl, *code, *data; + struct lsf_ucode_desc *lsf_desc; + char f[64]; + int ret; + + img->ucode_header = NULL; + + snprintf(f, sizeof(f), "gr/%s_bl", falcon_name); + ret = nvkm_firmware_get(subdev->device, f, &bl); + if (ret) + goto error; + + snprintf(f, sizeof(f), "gr/%s_inst", falcon_name); + ret = nvkm_firmware_get(subdev->device, f, &code); + if (ret) + goto free_bl; + + snprintf(f, sizeof(f), "gr/%s_data", falcon_name); + ret = nvkm_firmware_get(subdev->device, f, &data); + if (ret) + goto free_inst; + + img->ucode_data = ls_ucode_img_build(bl, code, data, + &img->ucode_desc); + if (IS_ERR(img->ucode_data)) { + ret = PTR_ERR(img->ucode_data); + goto free_data; + } + img->ucode_size = img->ucode_desc.image_size; + + snprintf(f, sizeof(f), "gr/%s_sig", falcon_name); + lsf_desc = nvkm_acr_load_firmware(subdev, f, sizeof(*lsf_desc)); + if (IS_ERR(lsf_desc)) { + ret = PTR_ERR(lsf_desc); + goto free_image; + } + /* not needed? the signature should already have the right value */ + lsf_desc->falcon_id = falcon_id; + memcpy(&img->lsb_header.signature, lsf_desc, sizeof(*lsf_desc)); + img->falcon_id = lsf_desc->falcon_id; + kfree(lsf_desc); + + /* success path - only free requested firmware files */ + goto free_data; + +free_image: + kfree(img->ucode_data); +free_data: + nvkm_firmware_put(data); +free_inst: + nvkm_firmware_put(code); +free_bl: + nvkm_firmware_put(bl); +error: + return ret; +} + +int +acr_ls_ucode_load_fecs(const struct nvkm_subdev *subdev, + struct ls_ucode_img *img) +{ + return ls_ucode_img_load_gr(subdev, img, "fecs", NVKM_FALCON_FECS); +} + +int +acr_ls_ucode_load_gpccs(const struct nvkm_subdev *subdev, + struct ls_ucode_img *img) +{ + return ls_ucode_img_load_gr(subdev, img, "gpccs", NVKM_FALCON_GPCCS); +} diff --git a/drm/nouveau/nvkm/subdev/secboot/priv.h b/drm/nouveau/nvkm/subdev/secboot/priv.h index 1922422fd539..75a3b995fdbb 100644 --- a/drm/nouveau/nvkm/subdev/secboot/priv.h +++ b/drm/nouveau/nvkm/subdev/secboot/priv.h @@ -30,188 +30,14 @@ struct nvkm_secboot_func { int (*oneinit)(struct nvkm_secboot *); int (*fini)(struct nvkm_secboot *, bool suspend); void *(*dtor)(struct nvkm_secboot *); - int (*reset)(struct nvkm_secboot *, enum nvkm_falconidx); - int (*start)(struct nvkm_secboot *, enum nvkm_falconidx); - - /* ID of the falcon that will perform secure boot */ - enum nvkm_falconidx boot_falcon; - /* Bit-mask of IDs of managed falcons */ - unsigned long managed_falcons; + int (*run_blob)(struct nvkm_secboot *, struct nvkm_gpuobj *); }; -int nvkm_secboot_ctor(const struct nvkm_secboot_func *, struct nvkm_device *, - int index, struct nvkm_secboot *); +int nvkm_secboot_ctor(const struct nvkm_secboot_func *, struct nvkm_acr *, + struct nvkm_device *, int, struct nvkm_secboot *); int nvkm_secboot_falcon_reset(struct nvkm_secboot *); int nvkm_secboot_falcon_run(struct nvkm_secboot *); -/* - * - * LS blob structures - * - */ - -/** - * struct lsf_ucode_desc - LS falcon signatures - * @prd_keys: signature to use when the GPU is in production mode - * @dgb_keys: signature to use when the GPU is in debug mode - * @b_prd_present: whether the production key is present - * @b_dgb_present: whether the debug key is present - * @falcon_id: ID of the falcon the ucode applies to - * - * Directly loaded from a signature file. 
- */ -struct lsf_ucode_desc { - u8 prd_keys[2][16]; - u8 dbg_keys[2][16]; - u32 b_prd_present; - u32 b_dbg_present; - u32 falcon_id; -}; - -/** - * struct lsf_lsb_header - LS firmware header - * @signature: signature to verify the firmware against - * @ucode_off: offset of the ucode blob in the WPR region. The ucode - * blob contains the bootloader, code and data of the - * LS falcon - * @ucode_size: size of the ucode blob, including bootloader - * @data_size: size of the ucode blob data - * @bl_code_size: size of the bootloader code - * @bl_imem_off: offset in imem of the bootloader - * @bl_data_off: offset of the bootloader data in WPR region - * @bl_data_size: size of the bootloader data - * @app_code_off: offset of the app code relative to ucode_off - * @app_code_size: size of the app code - * @app_data_off: offset of the app data relative to ucode_off - * @app_data_size: size of the app data - * @flags: flags for the secure bootloader - * - * This structure is written into the WPR region for each managed falcon. Each - * instance is referenced by the lsb_offset member of the corresponding - * lsf_wpr_header. - */ -struct lsf_lsb_header { - struct lsf_ucode_desc signature; - u32 ucode_off; - u32 ucode_size; - u32 data_size; - u32 bl_code_size; - u32 bl_imem_off; - u32 bl_data_off; - u32 bl_data_size; - u32 app_code_off; - u32 app_code_size; - u32 app_data_off; - u32 app_data_size; - u32 flags; -#define LSF_FLAG_LOAD_CODE_AT_0 1 -#define LSF_FLAG_DMACTL_REQ_CTX 4 -#define LSF_FLAG_FORCE_PRIV_LOAD 8 -}; - -/** - * struct lsf_wpr_header - LS blob WPR Header - * @falcon_id: LS falcon ID - * @lsb_offset: offset of the lsb_lsf_header in the WPR region - * @bootstrap_owner: secure falcon reponsible for bootstrapping the LS falcon - * @lazy_bootstrap: skip bootstrapping by ACR - * @status: bootstrapping status - * - * An array of these is written at the beginning of the WPR region, one for - * each managed falcon. The array is terminated by an instance which falcon_id - * is LSF_FALCON_ID_INVALID. - */ -struct lsf_wpr_header { - u32 falcon_id; - u32 lsb_offset; - u32 bootstrap_owner; - u32 lazy_bootstrap; - u32 status; -#define LSF_IMAGE_STATUS_NONE 0 -#define LSF_IMAGE_STATUS_COPY 1 -#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED 2 -#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED 3 -#define LSF_IMAGE_STATUS_VALIDATION_DONE 4 -#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED 5 -#define LSF_IMAGE_STATUS_BOOTSTRAP_READY 6 -}; - - -/** - * struct ls_ucode_img_desc - descriptor of firmware image - * @descriptor_size: size of this descriptor - * @image_size: size of the whole image - * @bootloader_start_offset: start offset of the bootloader in ucode image - * @bootloader_size: size of the bootloader - * @bootloader_imem_offset: start off set of the bootloader in IMEM - * @bootloader_entry_point: entry point of the bootloader in IMEM - * @app_start_offset: start offset of the LS firmware - * @app_size: size of the LS firmware's code and data - * @app_imem_offset: offset of the app in IMEM - * @app_imem_entry: entry point of the app in IMEM - * @app_dmem_offset: offset of the data in DMEM - * @app_resident_code_offset: offset of app code from app_start_offset - * @app_resident_code_size: size of the code - * @app_resident_data_offset: offset of data from app_start_offset - * @app_resident_data_size: size of data - * - * A firmware image contains the code, data, and bootloader of a given LS - * falcon in a single blob. This structure describes where everything is. 
- * - * This can be generated from a (bootloader, code, data) set if they have - * been loaded separately, or come directly from a file. - */ -struct ls_ucode_img_desc { - u32 descriptor_size; - u32 image_size; - u32 tools_version; - u32 app_version; - char date[64]; - u32 bootloader_start_offset; - u32 bootloader_size; - u32 bootloader_imem_offset; - u32 bootloader_entry_point; - u32 app_start_offset; - u32 app_size; - u32 app_imem_offset; - u32 app_imem_entry; - u32 app_dmem_offset; - u32 app_resident_code_offset; - u32 app_resident_code_size; - u32 app_resident_data_offset; - u32 app_resident_data_size; - u32 nb_overlays; - struct {u32 start; u32 size; } load_ovl[64]; - u32 compressed; -}; - -/** - * struct ls_ucode_img - temporary storage for loaded LS firmwares - * @node: to link within lsf_ucode_mgr - * @falcon_id: ID of the falcon this LS firmware is for - * @ucode_desc: loaded or generated map of ucode_data - * @ucode_header: header of the firmware - * @ucode_data: firmware payload (code and data) - * @ucode_size: size in bytes of data in ucode_data - * @wpr_header: WPR header to be written to the LS blob - * @lsb_header: LSB header to be written to the LS blob - * - * Preparing the WPR LS blob requires information about all the LS firmwares - * (size, etc) to be known. This structure contains all the data of one LS - * firmware. - */ -struct ls_ucode_img { - struct list_head node; - enum nvkm_falconidx falcon_id; - - struct ls_ucode_img_desc ucode_desc; - u32 *ucode_header; - u8 *ucode_data; - u32 ucode_size; - - struct lsf_wpr_header wpr_header; - struct lsf_lsb_header lsb_header; -}; struct flcn_u64 { u32 lo; @@ -233,150 +59,4 @@ static inline struct flcn_u64 u64_to_flcn64(u64 u) return ret; } -#define GM200_ACR_MAX_APPS 8 - -struct hsf_load_header_app { - u32 sec_code_off; - u32 sec_code_size; -}; - -/** - * struct hsf_load_header - HS firmware load header - */ -struct hsf_load_header { - u32 non_sec_code_off; - u32 non_sec_code_size; - u32 data_dma_base; - u32 data_size; - u32 num_apps; - struct hsf_load_header_app app[0]; -}; - -/** - * struct secboot_ls_single_func - manages a single LS firmware - * - * @load: load the external firmware into a ls_ucode_img - * @generate_bl_desc: function called on a block of bl_desc_size to generate the - * proper bootloader descriptor for this LS firmware - * @bl_desc_size: size of the bootloader descriptor - */ -struct secboot_ls_single_func { - int (*load)(const struct nvkm_subdev *, struct ls_ucode_img *); - void (*generate_bl_desc)(const struct ls_ucode_img *, u64, void *); - u32 bl_desc_size; -}; - -/** - * typedef secboot_ls_func - manages all the LS firmwares for this ACR - */ -typedef const struct secboot_ls_single_func * -secboot_ls_func[NVKM_FALCON_END]; - -int gm200_ls_load_fecs(const struct nvkm_subdev *, struct ls_ucode_img *); -int gm200_ls_load_gpccs(const struct nvkm_subdev *, struct ls_ucode_img *); - -/** - * Contains the whole secure boot state, allowing it to be performed as needed - * @wpr_addr: physical address of the WPR region - * @wpr_size: size in bytes of the WPR region - * @ls_blob: LS blob of all the LS firmwares, signatures, bootloaders - * @ls_blob_size: size of the LS blob - * @ls_blob_nb_regions: number of LS firmwares that will be loaded - * @acr_blob: HS blob - * @acr_blob_vma: mapping of the HS blob into the secure falcon's VM - * @acr_bl_desc: bootloader descriptor of the HS blob - * @hsbl_blob: HS blob bootloader - * @inst: instance block for HS falcon - * @pgd: page directory for the HS falcon - * @vm: 
address space used by the HS falcon - * @falcon_state: current state of the managed falcons - * @firmware_ok: whether the firmware blobs have been created - */ -struct gm200_secboot { - struct nvkm_secboot base; - const struct gm200_secboot_func *func; - const secboot_ls_func *ls_func; - - /* - * Address and size of the fixed WPR region, if any. On Tegra this - * region is set by the bootloader - */ - u64 wpr_addr; - u32 wpr_size; - - /* - * Address and size of the actual WPR region. - */ - u64 acr_wpr_addr; - u32 acr_wpr_size; - - /* - * HS FW - lock WPR region (dGPU only) and load LS FWs - * on Tegra the HS FW copies the LS blob into the fixed WPR instead - */ - struct nvkm_gpuobj *acr_load_blob; - struct { - struct hsf_load_header load_bl_header; - struct hsf_load_header_app __load_apps[GM200_ACR_MAX_APPS]; - }; - - /* HS FW - unlock WPR region (dGPU only) */ - struct nvkm_gpuobj *acr_unload_blob; - struct { - struct hsf_load_header unload_bl_header; - struct hsf_load_header_app __unload_apps[GM200_ACR_MAX_APPS]; - }; - - /* HS bootloader */ - void *hsbl_blob; - - /* LS FWs, to be loaded by the HS ACR */ - struct nvkm_gpuobj *ls_blob; - - /* Instance block & address space used for HS FW execution */ - struct nvkm_gpuobj *inst; - struct nvkm_gpuobj *pgd; - struct nvkm_vm *vm; - - /* To keep track of the state of all managed falcons */ - enum { - /* In non-secure state, no firmware loaded, no privileges*/ - NON_SECURE = 0, - /* In low-secure mode and ready to be started */ - RESET, - /* In low-secure mode and running */ - RUNNING, - } falcon_state[NVKM_FALCON_END]; - - bool firmware_ok; -}; -#define gm200_secboot(sb) container_of(sb, struct gm200_secboot, base) - -/** - * Contains functions we wish to abstract between GM200-like implementations - * @bl_desc_size: size of the BL descriptor used by this chip. - * @generate_bl_desc: hook that generates the proper BL descriptor format from - * the hsf_load_header format into a preallocated array of - * size bl_desc_size - * @prepare_blobs: prepares the various blobs needed for secure booting - */ -struct gm200_secboot_func { - /* - * Size of the bootloader descriptor for this chip. A block of this - * size is allocated before booting a falcon and the fixup_bl_desc - * callback is called on it - */ - u32 bl_desc_size; - void (*generate_bl_desc)(const struct hsf_load_header *, void *, u64); - - int (*prepare_blobs)(struct gm200_secboot *); -}; - -int gm200_secboot_oneinit(struct nvkm_secboot *); -void *gm200_secboot_dtor(struct nvkm_secboot *); -int gm200_secboot_reset(struct nvkm_secboot *, u32); -int gm200_secboot_start(struct nvkm_secboot *, u32); - -int gm20x_secboot_prepare_blobs(struct gm200_secboot *); - #endif -- git-series 0.8.10
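
For readers who want to see how the firmware wrapper format documented in
ls_ucode.h is meant to be consumed, here is a minimal sketch (not part of the
series) of locating the fw_bl_desc and its payload inside a firmware file,
mirroring what ls_ucode_img_build() does. The helper name example_locate_bl is
hypothetical; the magic check simply uses the bin_magic value documented above.

static const struct fw_bl_desc *
example_locate_bl(const struct firmware *fw, const void **payload)
{
	const struct fw_bin_header *hdr = (const void *)fw->data;

	/* illustrative only: the bin header sits at the start of the file */
	if (fw->size < sizeof(*hdr) || hdr->bin_magic != 0x3b1d14f0)
		return NULL;

	/* header_offset locates the fw_bl_desc, data_offset the payload */
	*payload = fw->data + hdr->data_offset;
	return (const struct fw_bl_desc *)(fw->data + hdr->header_offset);
}

The descriptor's start_tag, multiplied by 256, then gives the IMEM offset and
entry point used when the LS ucode image is assembled.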