1 From 359a25fe7208803d997488a96714c19e84b74f96 Mon Sep 17 00:00:00 2001
2 From: John Cox <jc@kynesim.co.uk>
3 Date: Thu, 5 Mar 2020 18:30:41 +0000
4 Subject: [PATCH] staging: media: rpivid: Add Raspberry Pi V4L2 H265
7 This driver is for the HEVC/H265 decoder block on the Raspberry
8 Pi 4, and conforms to the V4L2 stateless decoder API.
10 Signed-off-by: John Cox <jc@kynesim.co.uk>
12 staging: media: rpivid: Select MEDIA_CONTROLLER and MEDIA_CONTROLLER_REQUEST_API
14 MEDIA_CONTROLLER_REQUEST_API is a hidden option. If rpivid depended on it,
15 the user would first need to enable another driver that selects
16 MEDIA_CONTROLLER_REQUEST_API; only then would rpivid become available.
18 By selecting it instead of depending on it, it becomes possible to enable
19 rpivid without having to enable other potentially unnecessary drivers.
21 Signed-off-by: Hristo Venev <hristo@venev.name>
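The difference can be sketched with a hypothetical Kconfig fragment (the symbol VIDEO_EXAMPLE_DEC is illustrative, not a real driver):

```kconfig
# With "depends on", the option stays invisible in menuconfig until some
# other driver has already selected the hidden symbol:
config VIDEO_EXAMPLE_DEC
	tristate "Example stateless decoder"
	depends on MEDIA_CONTROLLER_REQUEST_API

# With "select", enabling the driver pulls the hidden symbol in by itself:
config VIDEO_EXAMPLE_DEC
	tristate "Example stateless decoder"
	select MEDIA_CONTROLLER
	select MEDIA_CONTROLLER_REQUEST_API
```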
23 rpivid_h265: Fix width/height typo
25 Signed-off-by: popcornmix <popcornmix@gmail.com>
27 rpivid_h265: Fix build warnings
29 Signed-off-by: Phil Elwell <phil@raspberrypi.com>
31 staging: rpivid: Fix crash when CMA alloc fails
33 If a realloc to increase the coeff buffer size fails then attempt to
34 re-allocate the original size. If that also fails then flag a fatal
35 error to abort decoding.
37 Signed-off-by: John Cox <jc@kynesim.co.uk>
39 rpivid: Request maximum hevc clock
41 Query maximum and minimum clock from driver
44 Signed-off-by: Dom Cobley <popcornmix@gmail.com>
46 rpivid: Switch to new clock api
48 Signed-off-by: Dom Cobley <popcornmix@gmail.com>
50 rpivid: Only clk_request_done once
52 Fixes: 25486f49bfe2e3ae13b90478d1eebd91413136ad
53 Signed-off-by: Dom Cobley <popcornmix@gmail.com>
55 media: rpivid: Remove the need to have num_entry_points set
57 The VAAPI H265 interface has a num_entry_points field but never sets it.
58 Allow a VAAPI shim to work without requiring a rewrite of the VAAPI
59 driver. num_entry_points can be calculated from the slice_segment_addr
60 of the next slice, so delay processing until we have that.
62 Also includes some minor cosmetics.
64 Signed-off-by: John Cox <jc@kynesim.co.uk>
66 media: rpivid: Convert to MPLANE
68 Use the multi-planar interface rather than the single-plane interface. This
69 allows dmabufs holding compressed data to be resized.
71 Signed-off-by: John Cox <jc@kynesim.co.uk>
73 media: rpivid: Add an enable count to irq claim Qs
75 Add an enable count to the irq Q structures to allow the irq logic to
76 block further callbacks if resources associated with the irq are not
77 yet available.
79 Signed-off-by: John Cox <jc@kynesim.co.uk>
81 media: rpivid: Add a Pass0 to accumulate slices and rework job finish
83 Due to overheads in assembling controls and requests it is worth having
84 the slice assembly phase separate from the h/w pass1 processing. Create
85 a queue to service pass1 rather than have the pass1 finished callback
86 trigger the next slice job.
88 This requires a rework of the logic that splits up the buffer and
89 request done events. This code contains two ways of doing that; we use
90 Ezequiel Garcia's <ezequiel@collabora.com> solution, but expect that
91 in the future this will be handled by the framework in a cleaner manner.
93 Fix up the handling of some of the memory exhaustion crashes uncovered
94 in the process of writing this code.
96 Signed-off-by: John Cox <jc@kynesim.co.uk>
98 media: rpivid: Map cmd buffer directly
100 It is unnecessary to have a separate dmabuf to hold the cmd buffer.
101 Map it directly from the kmalloc.
103 Signed-off-by: John Cox <jc@kynesim.co.uk>
105 media: rpivid: Improve values returned when setting output format
107 Guess a better value for the compressed bitstream buffer size
109 Signed-off-by: John Cox <jc@kynesim.co.uk>
111 media: rpivid: Improve stream_on/off conformance & clock setup
113 Fix stream on and off such that failures leave the driver in the correct
114 state. Ensure that the clock is on while we are streaming and off when
115 all contexts attached to this device have stopped streaming.
117 Signed-off-by: John Cox <jc@kynesim.co.uk>
119 media: rpivid: Improve SPS/PPS error handling/validation
121 Move size and width checking from bitstream processing to control
122 validation.
124 Signed-off-by: John Cox <jc@kynesim.co.uk>
126 media: rpivid: Fix H265 aux ent reuse of the same slot
128 It is legitimate, though unusual, for an aux ent associated with a slot
129 to be selected in phase 0 before a previous selection has been used and
130 released in phase 2. Fix this such that if the slot is found to be in
131 use, the aux ent associated with it is reused rather than a new aux
132 ent being created. This fixes a problem where, when the first aux ent
133 was released, track of the second was lost.
135 This bug was spotted in Nick's testing. It may explain some other
136 occasional, unreliable decode error reports where dmesg included
137 "Missing DPB AUX" messages.
139 Signed-off-by: John Cox <jc@kynesim.co.uk>
141 media: rpivid: Update to compile with new hevc decode params
143 DPB entries have moved from slice params to the new decode params
144 attribute - update to deal with this. Also fixes fallthrough
145 warnings which seem to be new in 5.14.
147 Signed-off-by: John Cox <jc@kynesim.co.uk>
149 media: rpivid: Make slice ctrl dynamic
151 Allows the user to submit a whole frame's worth of slice headers in
152 one lump, along with a single bitstream dmabuf for the whole lot.
153 This potentially saves a lot of bitstream copying.
155 Signed-off-by: John Cox <jc@kynesim.co.uk>
157 media: rpivid: Only create aux entries for H265 if needed
159 Only create aux entries of mv info for frames where that info might
160 be used by a later frame. This saves some memory bandwidth and
161 potentially some memory.
163 Signed-off-by: John Cox <jc@kynesim.co.uk>
165 media: rpivid: Avoid returning EINVAL to a G_FMT ioctl
167 The V4L2 spec says that G/S/TRY_FMT ioctls should never return errors
168 for anything other than wrong buffer types. Improve the capture format
169 function so that this holds, and unsupported values are converted
170 to supported ones properly.
172 Signed-off-by: John Cox <jc@kynesim.co.uk>
174 media: rpivid: Remove unused ctx state variable and defines
176 Remove unused ctx state tracking variable and associated defines.
177 Their presence implies they might be used, but they aren't.
179 Signed-off-by: John Cox <jc@kynesim.co.uk>
181 media: rpivid: Ensure IRQs have completed before uniniting context
183 Before de-initializing the decode context, sync with the IRQ queues to
184 ensure that decode no longer has any buffers in use. This fixes a
185 problem that manifested as ffmpeg leaking CMA buffers when it did a
186 stream off on OUTPUT before CAPTURE, though in reality the issue was
187 probably more widespread.
189 Signed-off-by: John Cox <jc@kynesim.co.uk>
191 media: rpivid: remove min_buffers_needed from src queue
193 Remove min_buffers_needed=1 from the src queue init. Src buffers are
194 bound to media requests, therefore this setting is not needed and
195 generates a WARN in kernel 5.16.
197 Signed-off-by: John Cox <jc@kynesim.co.uk>
199 rpivid: Use clk_get_max_rate()
201 The driver was using clk_round_rate() to figure out the maximum clock
202 rate that was allowed for the HEVC clock.
204 Since we have a function to return it directly now, let's use it.
206 Signed-off-by: Maxime Ripard <maxime@cerno.tech>
208 media: rpivid: Apply V4L2 stateless API changes
210 media: rpivid: Fix fallthrough warning
212 Replace old-style /* FALLTHRU */ with fallthrough;
214 Signed-off-by: John Cox <jc@kynesim.co.uk>
216 media: rpivid: Set min value as well as max for HEVC_DECODE_MODE
218 As only one value can be accepted, set both min and max to that value.
220 Signed-off-by: John Cox <jc@kynesim.co.uk>
222 media: rpivid: Accept ANNEX_B start codes
224 Allow the START_CODE control to take ANNEX_B as a value. This makes no
225 difference to any part of the decode process as the added bytes are in
226 data that we ignore. This helps my testing and may help userland code
227 that expects to send those bytes.
229 Signed-off-by: John Cox <jc@kynesim.co.uk>
231 rpivid: Convert to new clock rate API
233 Signed-off-by: Maxime Ripard <maxime@cerno.tech>
235 drivers/media/v4l2-core/v4l2-mem2mem.c | 2 -
236 drivers/staging/media/Kconfig | 2 +
237 drivers/staging/media/Makefile | 2 +-
238 drivers/staging/media/rpivid/Kconfig | 16 +
239 drivers/staging/media/rpivid/Makefile | 5 +
240 drivers/staging/media/rpivid/rpivid.c | 459 ++++
241 drivers/staging/media/rpivid/rpivid.h | 203 ++
242 drivers/staging/media/rpivid/rpivid_dec.c | 96 +
243 drivers/staging/media/rpivid/rpivid_dec.h | 19 +
244 drivers/staging/media/rpivid/rpivid_h265.c | 2698 +++++++++++++++++++
245 drivers/staging/media/rpivid/rpivid_hw.c | 383 +++
246 drivers/staging/media/rpivid/rpivid_hw.h | 303 +++
247 drivers/staging/media/rpivid/rpivid_video.c | 696 +++++
248 drivers/staging/media/rpivid/rpivid_video.h | 33 +
249 14 files changed, 4914 insertions(+), 3 deletions(-)
250 create mode 100644 drivers/staging/media/rpivid/Kconfig
251 create mode 100644 drivers/staging/media/rpivid/Makefile
252 create mode 100644 drivers/staging/media/rpivid/rpivid.c
253 create mode 100644 drivers/staging/media/rpivid/rpivid.h
254 create mode 100644 drivers/staging/media/rpivid/rpivid_dec.c
255 create mode 100644 drivers/staging/media/rpivid/rpivid_dec.h
256 create mode 100644 drivers/staging/media/rpivid/rpivid_h265.c
257 create mode 100644 drivers/staging/media/rpivid/rpivid_hw.c
258 create mode 100644 drivers/staging/media/rpivid/rpivid_hw.h
259 create mode 100644 drivers/staging/media/rpivid/rpivid_video.c
260 create mode 100644 drivers/staging/media/rpivid/rpivid_video.h
262 --- a/drivers/media/v4l2-core/v4l2-mem2mem.c
263 +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
264 @@ -492,8 +492,6 @@ void v4l2_m2m_job_finish(struct v4l2_m2m
265 * holding capture buffers. Those should use
266 * v4l2_m2m_buf_done_and_job_finish() instead.
268 - WARN_ON(m2m_ctx->out_q_ctx.q.subsystem_flags &
269 - VB2_V4L2_FL_SUPPORTS_M2M_HOLD_CAPTURE_BUF);
270 spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
271 schedule_next = _v4l2_m2m_job_finish(m2m_dev, m2m_ctx);
272 spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
273 --- a/drivers/staging/media/Kconfig
274 +++ b/drivers/staging/media/Kconfig
275 @@ -34,6 +34,8 @@ source "drivers/staging/media/omap4iss/K
277 source "drivers/staging/media/rkvdec/Kconfig"
279 +source "drivers/staging/media/rpivid/Kconfig"
281 source "drivers/staging/media/sunxi/Kconfig"
283 source "drivers/staging/media/tegra-video/Kconfig"
284 --- a/drivers/staging/media/Makefile
285 +++ b/drivers/staging/media/Makefile
286 @@ -7,7 +7,7 @@ obj-$(CONFIG_VIDEO_MESON_VDEC) += meson/
287 obj-$(CONFIG_VIDEO_MEYE) += deprecated/meye/
288 obj-$(CONFIG_VIDEO_OMAP4) += omap4iss/
289 obj-$(CONFIG_VIDEO_ROCKCHIP_VDEC) += rkvdec/
290 -obj-$(CONFIG_VIDEO_STKWEBCAM) += deprecated/stkwebcam/
291 +obj-$(CONFIG_VIDEO_RPIVID) += rpivid/
292 obj-$(CONFIG_VIDEO_SUNXI) += sunxi/
293 obj-$(CONFIG_VIDEO_TEGRA) += tegra-video/
294 obj-$(CONFIG_VIDEO_IPU3_IMGU) += ipu3/
296 +++ b/drivers/staging/media/rpivid/Kconfig
298 +# SPDX-License-Identifier: GPL-2.0
301 + tristate "Rpi H265 driver"
302 + depends on VIDEO_DEV
304 + select MEDIA_CONTROLLER
305 + select MEDIA_CONTROLLER_REQUEST_API
306 + select VIDEOBUF2_DMA_CONTIG
307 + select V4L2_MEM2MEM_DEV
309 + Support for the Rpi H265 h/w decoder.
311 + To compile this driver as a module, choose M here: the module
312 + will be called rpivid-hevc.
315 +++ b/drivers/staging/media/rpivid/Makefile
317 +# SPDX-License-Identifier: GPL-2.0
318 +obj-$(CONFIG_VIDEO_RPIVID) += rpivid-hevc.o
320 +rpivid-hevc-y = rpivid.o rpivid_video.o rpivid_dec.o \
321 + rpivid_hw.o rpivid_h265.o
323 +++ b/drivers/staging/media/rpivid/rpivid.c
325 +// SPDX-License-Identifier: GPL-2.0
327 + * Raspberry Pi HEVC driver
329 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
331 + * Based on the Cedrus VPU driver, that is:
333 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
334 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
335 + * Copyright (C) 2018 Bootlin
338 +#include <linux/platform_device.h>
339 +#include <linux/module.h>
340 +#include <linux/of.h>
342 +#include <media/v4l2-device.h>
343 +#include <media/v4l2-ioctl.h>
344 +#include <media/v4l2-ctrls.h>
345 +#include <media/v4l2-mem2mem.h>
348 +#include "rpivid_video.h"
349 +#include "rpivid_hw.h"
350 +#include "rpivid_dec.h"
353 + * Default /dev/videoN node number.
354 + * Deliberately avoid the very low numbers as these are often taken by webcams
355 + * etc, and simple apps tend to only go for /dev/video0.
357 +static int video_nr = 19;
358 +module_param(video_nr, int, 0644);
359 +MODULE_PARM_DESC(video_nr, "decoder video device number");
361 +static const struct rpivid_control rpivid_ctrls[] = {
364 + .id = V4L2_CID_STATELESS_HEVC_SPS,
365 + .ops = &rpivid_hevc_sps_ctrl_ops,
371 + .id = V4L2_CID_STATELESS_HEVC_PPS,
372 + .ops = &rpivid_hevc_pps_ctrl_ops,
378 + .id = V4L2_CID_STATELESS_HEVC_SCALING_MATRIX,
384 + .id = V4L2_CID_STATELESS_HEVC_DECODE_PARAMS,
390 + .name = "Slice param array",
391 + .id = V4L2_CID_STATELESS_HEVC_SLICE_PARAMS,
392 + .type = V4L2_CTRL_TYPE_HEVC_SLICE_PARAMS,
393 + .flags = V4L2_CTRL_FLAG_DYNAMIC_ARRAY,
394 + .dims = { 0x1000 },
400 + .id = V4L2_CID_STATELESS_HEVC_DECODE_MODE,
401 + .min = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
402 + .max = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
403 + .def = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
409 + .id = V4L2_CID_STATELESS_HEVC_START_CODE,
410 + .min = V4L2_STATELESS_HEVC_START_CODE_NONE,
411 + .max = V4L2_STATELESS_HEVC_START_CODE_ANNEX_B,
412 + .def = V4L2_STATELESS_HEVC_START_CODE_NONE,
418 +#define rpivid_ctrls_COUNT ARRAY_SIZE(rpivid_ctrls)
420 +struct v4l2_ctrl *rpivid_find_ctrl(struct rpivid_ctx *ctx, u32 id)
424 + for (i = 0; ctx->ctrls[i]; i++)
425 + if (ctx->ctrls[i]->id == id)
426 + return ctx->ctrls[i];
431 +void *rpivid_find_control_data(struct rpivid_ctx *ctx, u32 id)
433 + struct v4l2_ctrl *const ctrl = rpivid_find_ctrl(ctx, id);
435 + return !ctrl ? NULL : ctrl->p_cur.p;
438 +static int rpivid_init_ctrls(struct rpivid_dev *dev, struct rpivid_ctx *ctx)
440 + struct v4l2_ctrl_handler *hdl = &ctx->hdl;
441 + struct v4l2_ctrl *ctrl;
442 + unsigned int ctrl_size;
445 + v4l2_ctrl_handler_init(hdl, rpivid_ctrls_COUNT);
447 + v4l2_err(&dev->v4l2_dev,
448 + "Failed to initialize control handler\n");
452 + ctrl_size = sizeof(ctrl) * (rpivid_ctrls_COUNT + 1);
454 + ctx->ctrls = kzalloc(ctrl_size, GFP_KERNEL);
458 + for (i = 0; i < rpivid_ctrls_COUNT; i++) {
459 + ctrl = v4l2_ctrl_new_custom(hdl, &rpivid_ctrls[i].cfg,
462 + v4l2_err(&dev->v4l2_dev,
463 + "Failed to create new custom control id=%#x\n",
464 + rpivid_ctrls[i].cfg.id);
466 + v4l2_ctrl_handler_free(hdl);
471 + ctx->ctrls[i] = ctrl;
474 + ctx->fh.ctrl_handler = hdl;
475 + v4l2_ctrl_handler_setup(hdl);
480 +static int rpivid_request_validate(struct media_request *req)
482 + struct media_request_object *obj;
483 + struct v4l2_ctrl_handler *parent_hdl, *hdl;
484 + struct rpivid_ctx *ctx = NULL;
485 + struct v4l2_ctrl *ctrl_test;
486 + unsigned int count;
489 + list_for_each_entry(obj, &req->objects, list) {
490 + struct vb2_buffer *vb;
492 + if (vb2_request_object_is_buffer(obj)) {
493 + vb = container_of(obj, struct vb2_buffer, req_obj);
494 + ctx = vb2_get_drv_priv(vb->vb2_queue);
503 + count = vb2_request_buffer_cnt(req);
505 + v4l2_info(&ctx->dev->v4l2_dev,
506 + "No buffer was provided with the request\n");
508 + } else if (count > 1) {
509 + v4l2_info(&ctx->dev->v4l2_dev,
510 + "More than one buffer was provided with the request\n");
514 + parent_hdl = &ctx->hdl;
516 + hdl = v4l2_ctrl_request_hdl_find(req, parent_hdl);
518 + v4l2_info(&ctx->dev->v4l2_dev, "Missing codec control(s)\n");
522 + for (i = 0; i < rpivid_ctrls_COUNT; i++) {
523 + if (!rpivid_ctrls[i].required)
527 + v4l2_ctrl_request_hdl_ctrl_find(hdl,
528 + rpivid_ctrls[i].cfg.id);
530 + v4l2_info(&ctx->dev->v4l2_dev,
531 + "Missing required codec control\n");
532 + v4l2_ctrl_request_hdl_put(hdl);
537 + v4l2_ctrl_request_hdl_put(hdl);
539 + return vb2_request_validate(req);
542 +static int rpivid_open(struct file *file)
544 + struct rpivid_dev *dev = video_drvdata(file);
545 + struct rpivid_ctx *ctx = NULL;
548 + if (mutex_lock_interruptible(&dev->dev_mutex))
549 + return -ERESTARTSYS;
551 + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
553 + mutex_unlock(&dev->dev_mutex);
558 + mutex_init(&ctx->ctx_mutex);
560 + v4l2_fh_init(&ctx->fh, video_devdata(file));
561 + file->private_data = &ctx->fh;
564 + ret = rpivid_init_ctrls(dev, ctx);
568 + ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx,
569 + &rpivid_queue_init);
570 + if (IS_ERR(ctx->fh.m2m_ctx)) {
571 + ret = PTR_ERR(ctx->fh.m2m_ctx);
575 + /* The only bit of format info that we can guess now is H265 src
576 + * Everything else we need more info for
578 + rpivid_prepare_src_format(&ctx->src_fmt);
580 + v4l2_fh_add(&ctx->fh);
582 + mutex_unlock(&dev->dev_mutex);
587 + v4l2_ctrl_handler_free(&ctx->hdl);
589 + mutex_destroy(&ctx->ctx_mutex);
592 + mutex_unlock(&dev->dev_mutex);
597 +static int rpivid_release(struct file *file)
599 + struct rpivid_dev *dev = video_drvdata(file);
600 + struct rpivid_ctx *ctx = container_of(file->private_data,
601 + struct rpivid_ctx, fh);
603 + mutex_lock(&dev->dev_mutex);
605 + v4l2_fh_del(&ctx->fh);
606 + v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
608 + v4l2_ctrl_handler_free(&ctx->hdl);
611 + v4l2_fh_exit(&ctx->fh);
612 + mutex_destroy(&ctx->ctx_mutex);
616 + mutex_unlock(&dev->dev_mutex);
621 +static const struct v4l2_file_operations rpivid_fops = {
622 + .owner = THIS_MODULE,
623 + .open = rpivid_open,
624 + .release = rpivid_release,
625 + .poll = v4l2_m2m_fop_poll,
626 + .unlocked_ioctl = video_ioctl2,
627 + .mmap = v4l2_m2m_fop_mmap,
630 +static const struct video_device rpivid_video_device = {
631 + .name = RPIVID_NAME,
632 + .vfl_dir = VFL_DIR_M2M,
633 + .fops = &rpivid_fops,
634 + .ioctl_ops = &rpivid_ioctl_ops,
636 + .release = video_device_release_empty,
637 + .device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING,
640 +static const struct v4l2_m2m_ops rpivid_m2m_ops = {
641 + .device_run = rpivid_device_run,
644 +static const struct media_device_ops rpivid_m2m_media_ops = {
645 + .req_validate = rpivid_request_validate,
646 + .req_queue = v4l2_m2m_request_queue,
649 +static int rpivid_probe(struct platform_device *pdev)
651 + struct rpivid_dev *dev;
652 + struct video_device *vfd;
655 + dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
659 + dev->vfd = rpivid_video_device;
660 + dev->dev = &pdev->dev;
664 + ret = rpivid_hw_probe(dev);
666 + dev_err(&pdev->dev, "Failed to probe hardware\n");
670 + dev->dec_ops = &rpivid_dec_ops_h265;
672 + mutex_init(&dev->dev_mutex);
674 + ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
676 + dev_err(&pdev->dev, "Failed to register V4L2 device\n");
681 + vfd->lock = &dev->dev_mutex;
682 + vfd->v4l2_dev = &dev->v4l2_dev;
684 + snprintf(vfd->name, sizeof(vfd->name), "%s", rpivid_video_device.name);
685 + video_set_drvdata(vfd, dev);
687 + dev->m2m_dev = v4l2_m2m_init(&rpivid_m2m_ops);
688 + if (IS_ERR(dev->m2m_dev)) {
689 + v4l2_err(&dev->v4l2_dev,
690 + "Failed to initialize V4L2 M2M device\n");
691 + ret = PTR_ERR(dev->m2m_dev);
696 + dev->mdev.dev = &pdev->dev;
697 + strscpy(dev->mdev.model, RPIVID_NAME, sizeof(dev->mdev.model));
698 + strscpy(dev->mdev.bus_info, "platform:" RPIVID_NAME,
699 + sizeof(dev->mdev.bus_info));
701 + media_device_init(&dev->mdev);
702 + dev->mdev.ops = &rpivid_m2m_media_ops;
703 + dev->v4l2_dev.mdev = &dev->mdev;
705 + ret = video_register_device(vfd, VFL_TYPE_VIDEO, video_nr);
707 + v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
711 + v4l2_info(&dev->v4l2_dev,
712 + "Device registered as /dev/video%d\n", vfd->num);
714 + ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
715 + MEDIA_ENT_F_PROC_VIDEO_DECODER);
717 + v4l2_err(&dev->v4l2_dev,
718 + "Failed to initialize V4L2 M2M media controller\n");
722 + ret = media_device_register(&dev->mdev);
724 + v4l2_err(&dev->v4l2_dev, "Failed to register media device\n");
728 + platform_set_drvdata(pdev, dev);
733 + v4l2_m2m_unregister_media_controller(dev->m2m_dev);
735 + video_unregister_device(&dev->vfd);
737 + v4l2_m2m_release(dev->m2m_dev);
739 + v4l2_device_unregister(&dev->v4l2_dev);
744 +static int rpivid_remove(struct platform_device *pdev)
746 + struct rpivid_dev *dev = platform_get_drvdata(pdev);
748 + if (media_devnode_is_registered(dev->mdev.devnode)) {
749 + media_device_unregister(&dev->mdev);
750 + v4l2_m2m_unregister_media_controller(dev->m2m_dev);
751 + media_device_cleanup(&dev->mdev);
754 + v4l2_m2m_release(dev->m2m_dev);
755 + video_unregister_device(&dev->vfd);
756 + v4l2_device_unregister(&dev->v4l2_dev);
758 + rpivid_hw_remove(dev);
763 +static const struct of_device_id rpivid_dt_match[] = {
765 + .compatible = "raspberrypi,rpivid-vid-decoder",
769 +MODULE_DEVICE_TABLE(of, rpivid_dt_match);
771 +static struct platform_driver rpivid_driver = {
772 + .probe = rpivid_probe,
773 + .remove = rpivid_remove,
775 + .name = RPIVID_NAME,
776 + .of_match_table = of_match_ptr(rpivid_dt_match),
779 +module_platform_driver(rpivid_driver);
781 +MODULE_LICENSE("GPL v2");
782 +MODULE_AUTHOR("John Cox <jc@kynesim.co.uk>");
783 +MODULE_DESCRIPTION("Raspberry Pi HEVC V4L2 driver");
785 +++ b/drivers/staging/media/rpivid/rpivid.h
787 +/* SPDX-License-Identifier: GPL-2.0 */
789 + * Raspberry Pi HEVC driver
791 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
793 + * Based on the Cedrus VPU driver, that is:
795 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
796 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
797 + * Copyright (C) 2018 Bootlin
803 +#include <linux/clk.h>
804 +#include <linux/platform_device.h>
805 +#include <media/v4l2-ctrls.h>
806 +#include <media/v4l2-device.h>
807 +#include <media/v4l2-mem2mem.h>
808 +#include <media/videobuf2-v4l2.h>
809 +#include <media/videobuf2-dma-contig.h>
811 +#define OPT_DEBUG_POLL_IRQ 0
813 +#define RPIVID_DEC_ENV_COUNT 6
814 +#define RPIVID_P1BUF_COUNT 3
815 +#define RPIVID_P2BUF_COUNT 3
817 +#define RPIVID_NAME "rpivid"
819 +#define RPIVID_CAPABILITY_UNTILED BIT(0)
820 +#define RPIVID_CAPABILITY_H265_DEC BIT(1)
822 +#define RPIVID_QUIRK_NO_DMA_OFFSET BIT(0)
824 +enum rpivid_irq_status {
830 +struct rpivid_control {
831 + struct v4l2_ctrl_config cfg;
832 + unsigned char required:1;
835 +struct rpivid_h265_run {
837 + const struct v4l2_ctrl_hevc_sps *sps;
838 + const struct v4l2_ctrl_hevc_pps *pps;
839 + const struct v4l2_ctrl_hevc_decode_params *dec;
840 + const struct v4l2_ctrl_hevc_slice_params *slice_params;
841 + const struct v4l2_ctrl_hevc_scaling_matrix *scaling_matrix;
845 + struct vb2_v4l2_buffer *src;
846 + struct vb2_v4l2_buffer *dst;
848 + struct rpivid_h265_run h265;
851 +struct rpivid_buffer {
852 + struct v4l2_m2m_buffer m2m_buf;
855 +struct rpivid_dec_state;
856 +struct rpivid_dec_env;
858 +struct rpivid_gptr {
862 + unsigned long attrs;
866 +typedef void (*rpivid_irq_callback)(struct rpivid_dev *dev, void *ctx);
868 +struct rpivid_q_aux;
869 +#define RPIVID_AUX_ENT_COUNT VB2_MAX_FRAME
873 + struct rpivid_dev *dev;
875 + struct v4l2_pix_format_mplane src_fmt;
876 + struct v4l2_pix_format_mplane dst_fmt;
882 + // fatal_err is set if an error has occurred s.t. decode cannot
883 + // continue (such as running out of CMA)
886 + /* Lock for queue operations */
887 + struct mutex ctx_mutex;
889 + struct v4l2_ctrl_handler hdl;
890 + struct v4l2_ctrl **ctrls;
892 + /* Decode state - stateless decoder my *** */
893 + /* state contains stuff that is only needed in phase0
894 + * it could be held in dec_env but that would be wasteful
896 + struct rpivid_dec_state *state;
897 + struct rpivid_dec_env *dec0;
899 + /* Spinlock protecting dec_free */
900 + spinlock_t dec_lock;
901 + struct rpivid_dec_env *dec_free;
903 + struct rpivid_dec_env *dec_pool;
905 + unsigned int p1idx;
907 + struct rpivid_gptr bitbufs[RPIVID_P1BUF_COUNT];
909 + /* *** Should be in dev *** */
910 + unsigned int p2idx;
911 + struct rpivid_gptr pu_bufs[RPIVID_P2BUF_COUNT];
912 + struct rpivid_gptr coeff_bufs[RPIVID_P2BUF_COUNT];
914 + /* Spinlock protecting aux_free */
915 + spinlock_t aux_lock;
916 + struct rpivid_q_aux *aux_free;
918 + struct rpivid_q_aux *aux_ents[RPIVID_AUX_ENT_COUNT];
920 + unsigned int colmv_stride;
921 + unsigned int colmv_picsize;
924 +struct rpivid_dec_ops {
925 + void (*setup)(struct rpivid_ctx *ctx, struct rpivid_run *run);
926 + int (*start)(struct rpivid_ctx *ctx);
927 + void (*stop)(struct rpivid_ctx *ctx);
928 + void (*trigger)(struct rpivid_ctx *ctx);
931 +struct rpivid_variant {
932 + unsigned int capabilities;
933 + unsigned int quirks;
934 + unsigned int mod_rate;
937 +struct rpivid_hw_irq_ent;
939 +#define RPIVID_ICTL_ENABLE_UNLIMITED (-1)
941 +struct rpivid_hw_irq_ctrl {
942 + /* Spinlock protecting claim and tail */
944 + struct rpivid_hw_irq_ent *claim;
945 + struct rpivid_hw_irq_ent *tail;
947 + /* Ent for pending irq - also prevents sched */
948 + struct rpivid_hw_irq_ent *irq;
949 + /* Non-zero => do not start a new job - outer layer sched pending */
951 + /* Enable count. -1 always OK, 0 do not sched, +ve sched & count down */
953 + /* Thread CB requested */
958 + struct v4l2_device v4l2_dev;
959 + struct video_device vfd;
960 + struct media_device mdev;
961 + struct media_pad pad[2];
962 + struct platform_device *pdev;
963 + struct device *dev;
964 + struct v4l2_m2m_dev *m2m_dev;
965 + const struct rpivid_dec_ops *dec_ops;
967 + /* Device file mutex */
968 + struct mutex dev_mutex;
970 + void __iomem *base_irq;
971 + void __iomem *base_h265;
974 + unsigned long max_clock_rate;
978 + struct rpivid_hw_irq_ctrl ic_active1;
979 + struct rpivid_hw_irq_ctrl ic_active2;
982 +extern const struct rpivid_dec_ops rpivid_dec_ops_h265;
983 +extern const struct v4l2_ctrl_ops rpivid_hevc_sps_ctrl_ops;
984 +extern const struct v4l2_ctrl_ops rpivid_hevc_pps_ctrl_ops;
986 +struct v4l2_ctrl *rpivid_find_ctrl(struct rpivid_ctx *ctx, u32 id);
987 +void *rpivid_find_control_data(struct rpivid_ctx *ctx, u32 id);
991 +++ b/drivers/staging/media/rpivid/rpivid_dec.c
993 +// SPDX-License-Identifier: GPL-2.0
995 + * Raspberry Pi HEVC driver
997 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
999 + * Based on the Cedrus VPU driver, that is:
1001 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
1002 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
1003 + * Copyright (C) 2018 Bootlin
1006 +#include <media/v4l2-device.h>
1007 +#include <media/v4l2-ioctl.h>
1008 +#include <media/v4l2-event.h>
1009 +#include <media/v4l2-mem2mem.h>
1011 +#include "rpivid.h"
1012 +#include "rpivid_dec.h"
1014 +void rpivid_device_run(void *priv)
1016 + struct rpivid_ctx *const ctx = priv;
1017 + struct rpivid_dev *const dev = ctx->dev;
1018 + struct rpivid_run run = {};
1019 + struct media_request *src_req;
1021 + run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
1022 + run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
1024 + if (!run.src || !run.dst) {
1025 + v4l2_err(&dev->v4l2_dev, "%s: Missing buffer: src=%p, dst=%p\n",
1026 + __func__, run.src, run.dst);
1030 + /* Apply request(s) controls */
1031 + src_req = run.src->vb2_buf.req_obj.req;
1033 + v4l2_err(&dev->v4l2_dev, "%s: Missing request\n", __func__);
1037 + v4l2_ctrl_request_setup(src_req, &ctx->hdl);
1039 + switch (ctx->src_fmt.pixelformat) {
1040 + case V4L2_PIX_FMT_HEVC_SLICE:
1042 + const struct v4l2_ctrl *ctrl;
1045 + rpivid_find_control_data(ctx,
1046 + V4L2_CID_STATELESS_HEVC_SPS);
1048 + rpivid_find_control_data(ctx,
1049 + V4L2_CID_STATELESS_HEVC_PPS);
1051 + rpivid_find_control_data(ctx,
1052 + V4L2_CID_STATELESS_HEVC_DECODE_PARAMS);
1054 + ctrl = rpivid_find_ctrl(ctx,
1055 + V4L2_CID_STATELESS_HEVC_SLICE_PARAMS);
1056 + if (!ctrl || !ctrl->elems) {
1057 + v4l2_err(&dev->v4l2_dev, "%s: Missing slice params\n",
1061 + run.h265.slice_ents = ctrl->elems;
1062 + run.h265.slice_params = ctrl->p_cur.p;
1064 + run.h265.scaling_matrix =
1065 + rpivid_find_control_data(ctx,
1066 + V4L2_CID_STATELESS_HEVC_SCALING_MATRIX);
1074 + v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
1076 + dev->dec_ops->setup(ctx, &run);
1078 + /* Complete request(s) controls */
1079 + v4l2_ctrl_request_complete(src_req, &ctx->hdl);
1081 + dev->dec_ops->trigger(ctx);
1085 + /* We really shouldn't get here but tidy up what we can */
1086 + v4l2_m2m_buf_done_and_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx,
1087 + VB2_BUF_STATE_ERROR);
1090 +++ b/drivers/staging/media/rpivid/rpivid_dec.h
1092 +/* SPDX-License-Identifier: GPL-2.0 */
1094 + * Raspberry Pi HEVC driver
1096 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
1098 + * Based on the Cedrus VPU driver, that is:
1100 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
1101 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
1102 + * Copyright (C) 2018 Bootlin
1105 +#ifndef _RPIVID_DEC_H_
1106 +#define _RPIVID_DEC_H_
1108 +void rpivid_device_run(void *priv);
1112 +++ b/drivers/staging/media/rpivid/rpivid_h265.c
1114 +// SPDX-License-Identifier: GPL-2.0-or-later
1116 + * Raspberry Pi HEVC driver
1118 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
1120 + * Based on the Cedrus VPU driver, that is:
1122 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
1123 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
1124 + * Copyright (C) 2018 Bootlin
1127 +#include <linux/delay.h>
1128 +#include <linux/types.h>
1130 +#include <media/videobuf2-dma-contig.h>
1132 +#include "rpivid.h"
1133 +#include "rpivid_hw.h"
1134 +#include "rpivid_video.h"
1136 +#define DEBUG_TRACE_P1_CMD 0
1137 +#define DEBUG_TRACE_EXECUTION 0
1139 +#define USE_REQUEST_PIN 1
1141 +#if DEBUG_TRACE_EXECUTION
1142 +#define xtrace_in(dev_, de_)\
1143 + v4l2_info(&(dev_)->v4l2_dev, "%s[%d]: in\n", __func__,\
1144 + (de_) == NULL ? -1 : (de_)->decode_order)
1145 +#define xtrace_ok(dev_, de_)\
1146 + v4l2_info(&(dev_)->v4l2_dev, "%s[%d]: ok\n", __func__,\
1147 + (de_) == NULL ? -1 : (de_)->decode_order)
1148 +#define xtrace_fin(dev_, de_)\
1149 + v4l2_info(&(dev_)->v4l2_dev, "%s[%d]: finish\n", __func__,\
1150 + (de_) == NULL ? -1 : (de_)->decode_order)
1151 +#define xtrace_fail(dev_, de_)\
1152 + v4l2_info(&(dev_)->v4l2_dev, "%s[%d]: FAIL\n", __func__,\
1153 + (de_) == NULL ? -1 : (de_)->decode_order)
1155 +#define xtrace_in(dev_, de_)
1156 +#define xtrace_ok(dev_, de_)
1157 +#define xtrace_fin(dev_, de_)
1158 +#define xtrace_fail(dev_, de_)
1161 +enum hevc_slice_type {
1167 +enum hevc_layer { L0 = 0, L1 = 1 };
1169 +static int gptr_alloc(struct rpivid_dev *const dev, struct rpivid_gptr *gptr,
1170 + size_t size, unsigned long attrs)
1172 + gptr->size = size;
1173 + gptr->attrs = attrs;
1175 + gptr->ptr = dma_alloc_attrs(dev->dev, gptr->size, &gptr->addr,
1176 + GFP_KERNEL, gptr->attrs);
1177 + return !gptr->ptr ? -ENOMEM : 0;
1180 +static void gptr_free(struct rpivid_dev *const dev,
1181 + struct rpivid_gptr *const gptr)
1184 + dma_free_attrs(dev->dev, gptr->size, gptr->ptr, gptr->addr,
1192 +/* Realloc but do not copy
1193 + *
1194 + * Frees then allocs.
1195 + * If the alloc fails then it attempts to re-allocate the old size.
1196 + * On error check gptr->ptr to determine if anything is currently
1197 + * allocated.
1198 + */
1199 +static int gptr_realloc_new(struct rpivid_dev * const dev,
1200 + struct rpivid_gptr * const gptr, size_t size)
1202 + const size_t old_size = gptr->size;
1204 + if (size == gptr->size)
1208 + dma_free_attrs(dev->dev, gptr->size, gptr->ptr,
1209 + gptr->addr, gptr->attrs);
1212 + gptr->size = size;
1213 + gptr->ptr = dma_alloc_attrs(dev->dev, gptr->size,
1214 + &gptr->addr, GFP_KERNEL, gptr->attrs);
1218 + gptr->size = old_size;
1219 + gptr->ptr = dma_alloc_attrs(dev->dev, gptr->size,
1220 + &gptr->addr, GFP_KERNEL, gptr->attrs);
1232 +static size_t next_size(const size_t x)
1234 + return rpivid_round_up_size(x + 1);
1237 +#define NUM_SCALING_FACTORS 4064 /* Not a typo = 0xbe0 + 0x400 */
1239 +#define AXI_BASE64 0
1241 +#define PROB_BACKUP ((20 << 12) + (20 << 6) + (0 << 0))
1242 +#define PROB_RELOAD ((20 << 12) + (20 << 0) + (0 << 6))
1244 +#define HEVC_MAX_REFS V4L2_HEVC_DPB_ENTRIES_NUM_MAX
1246 +//////////////////////////////////////////////////////////////////////////////
1253 +struct rpivid_q_aux {
1254 + unsigned int refcount;
1255 + unsigned int q_index;
1256 + struct rpivid_q_aux *next;
1257 + struct rpivid_gptr col;
1260 +//////////////////////////////////////////////////////////////////////////////
1262 +enum rpivid_decode_state {
1263 + RPIVID_DECODE_SLICE_START,
1264 + RPIVID_DECODE_SLICE_CONTINUE,
1265 + RPIVID_DECODE_ERROR_CONTINUE,
1266 + RPIVID_DECODE_ERROR_DONE,
1267 + RPIVID_DECODE_PHASE1,
1268 + RPIVID_DECODE_END,
1271 +struct rpivid_dec_env {
1272 + struct rpivid_ctx *ctx;
1273 + struct rpivid_dec_env *next;
1275 + enum rpivid_decode_state state;
1276 + unsigned int decode_order;
1277 + int p1_status; /* P1 status - what to realloc */
1279 + struct rpi_cmd *cmd_fifo;
1280 + unsigned int cmd_len, cmd_max;
1281 + unsigned int num_slice_msgs;
1282 + unsigned int pic_width_in_ctbs_y;
1283 + unsigned int pic_height_in_ctbs_y;
1284 + unsigned int dpbno_col;
1285 + u32 reg_slicestart;
1286 + int collocated_from_l0_flag;
1288 + * Last CTB/Tile X,Y processed by (wpp_)entry_point
1289 + * Could be in _state as P0 only but needs updating where _state
1292 + unsigned int entry_ctb_x;
1293 + unsigned int entry_ctb_y;
1294 + unsigned int entry_tile_x;
1295 + unsigned int entry_tile_y;
1296 + unsigned int entry_qp;
1300 + u32 rpi_framesize;
1303 + struct vb2_v4l2_buffer *frame_buf; // Detached dest buffer
1304 + struct vb2_v4l2_buffer *src_buf; // Detached src buffer
1305 + unsigned int frame_c_offset;
1306 + unsigned int frame_stride;
1307 + dma_addr_t frame_addr;
1308 + dma_addr_t ref_addrs[16];
1309 + struct rpivid_q_aux *frame_aux;
1310 + struct rpivid_q_aux *col_aux;
1312 + dma_addr_t cmd_addr;
1315 + dma_addr_t pu_base_vc;
1316 + dma_addr_t coeff_base_vc;
1320 + struct rpivid_gptr *bit_copy_gptr;
1321 + size_t bit_copy_len;
1323 +#define SLICE_MSGS_MAX (2 * HEVC_MAX_REFS * 8 + 3)
1324 + u16 slice_msgs[SLICE_MSGS_MAX];
1325 + u8 scaling_factors[NUM_SCALING_FACTORS];
1327 +#if USE_REQUEST_PIN
1328 + struct media_request *req_pin;
1330 + struct media_request_object *req_obj;
1332 + struct rpivid_hw_irq_ent irq_ent;
1335 +#define member_size(type, member) sizeof(((type *)0)->member)
1337 +struct rpivid_dec_state {
1338 + struct v4l2_ctrl_hevc_sps sps;
1339 + struct v4l2_ctrl_hevc_pps pps;
1341 + // Helper vars & tables derived from sps/pps
1342 + unsigned int log2_ctb_size; /* log2 width of a CTB */
1343 + unsigned int ctb_width; /* Width in CTBs */
1344 + unsigned int ctb_height; /* Height in CTBs */
1345 + unsigned int ctb_size; /* Pic area in CTBs */
1346 + unsigned int tile_width; /* Width in tiles */
1347 + unsigned int tile_height; /* Height in tiles */
1351 + int *ctb_addr_rs_to_ts;
1352 + int *ctb_addr_ts_to_rs;
1354 + // Aux storage for DPB
1356 + struct rpivid_q_aux *ref_aux[HEVC_MAX_REFS];
1357 + struct rpivid_q_aux *frame_aux;
1360 + unsigned int slice_idx;
1361 + bool slice_temporal_mvp; /* Slice flag but constant for frame */
1365 + // Temp vars per run - don't actually need to persist
1367 + dma_addr_t src_addr;
1368 + const struct v4l2_ctrl_hevc_slice_params *sh;
1369 + const struct v4l2_ctrl_hevc_decode_params *dec;
1370 + unsigned int nb_refs[2];
1371 + unsigned int slice_qp;
1372 + unsigned int max_num_merge_cand; // 0 if I-slice
1373 + bool dependent_slice_segment_flag;
1375 + unsigned int start_ts; /* slice_segment_addr -> ts */
1376 + unsigned int start_ctb_x; /* CTB X,Y of start_ts */
1377 + unsigned int start_ctb_y;
1378 + unsigned int prev_ctb_x; /* CTB X,Y of start_ts - 1 */
1379 + unsigned int prev_ctb_y;
1382 +#if !USE_REQUEST_PIN
1383 +static void dst_req_obj_release(struct media_request_object *object)
1388 +static const struct media_request_object_ops dst_req_obj_ops = {
1389 + .release = dst_req_obj_release,
1393 +static inline int clip_int(const int x, const int lo, const int hi)
1395 + return x < lo ? lo : x > hi ? hi : x;
1398 +//////////////////////////////////////////////////////////////////////////////
1399 +// Phase 1 command and bit FIFOs
1401 +#if DEBUG_TRACE_P1_CMD
1405 +static int cmds_check_space(struct rpivid_dec_env *const de, unsigned int n)
1407 + struct rpi_cmd *a;
1408 + unsigned int newmax;
1410 + if (n > 0x100000) {
1411 + v4l2_err(&de->ctx->dev->v4l2_dev,
1412 + "%s: n %u implausible\n", __func__, n);
1416 + if (de->cmd_len + n <= de->cmd_max)
1419 + newmax = roundup_pow_of_two(de->cmd_len + n);
1421 + a = krealloc(de->cmd_fifo, newmax * sizeof(struct rpi_cmd),
1424 + v4l2_err(&de->ctx->dev->v4l2_dev,
1425 + "Failed cmd buffer realloc from %u to %u\n",
1426 + de->cmd_max, newmax);
1429 + v4l2_info(&de->ctx->dev->v4l2_dev,
1430 + "cmd buffer realloc from %u to %u\n", de->cmd_max, newmax);
1433 + de->cmd_max = newmax;
1437 +// ???? u16 addr - put in u32
1438 +static void p1_apb_write(struct rpivid_dec_env *const de, const u16 addr,
1441 + if (de->cmd_len >= de->cmd_max) {
1442 + v4l2_err(&de->ctx->dev->v4l2_dev,
1443 + "%s: Overflow @ %d\n", __func__, de->cmd_len);
1447 + de->cmd_fifo[de->cmd_len].addr = addr;
1448 + de->cmd_fifo[de->cmd_len].data = data;
1450 +#if DEBUG_TRACE_P1_CMD
1451 + if (++p1_z < 256) {
1452 + v4l2_info(&de->ctx->dev->v4l2_dev, "[%02x] %x %x\n",
1453 + de->cmd_len, addr, data);
1459 +static int ctb_to_tile(unsigned int ctb, unsigned int *bd, int num)
1463 + for (i = 1; ctb >= bd[i]; i++)
1464 + ; // bd[] has num+1 elements; bd[0]=0;
1468 +static unsigned int ctb_to_tile_x(const struct rpivid_dec_state *const s,
1469 + const unsigned int ctb_x)
1471 + return ctb_to_tile(ctb_x, s->col_bd, s->tile_width);
1474 +static unsigned int ctb_to_tile_y(const struct rpivid_dec_state *const s,
1475 + const unsigned int ctb_y)
1477 + return ctb_to_tile(ctb_y, s->row_bd, s->tile_height);
1480 +static void aux_q_free(struct rpivid_ctx *const ctx,
1481 + struct rpivid_q_aux *const aq)
1483 + struct rpivid_dev *const dev = ctx->dev;
1485 + gptr_free(dev, &aq->col);
1489 +static struct rpivid_q_aux *aux_q_alloc(struct rpivid_ctx *const ctx,
1490 + const unsigned int q_index)
1492 + struct rpivid_dev *const dev = ctx->dev;
1493 + struct rpivid_q_aux *const aq = kzalloc(sizeof(*aq), GFP_KERNEL);
1498 + if (gptr_alloc(dev, &aq->col, ctx->colmv_picsize,
1499 + DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_NO_KERNEL_MAPPING))
1503 + * Spinlock not required as called in P0 only and
1504 + * aux checks done by _new
1507 + aq->q_index = q_index;
1508 + ctx->aux_ents[q_index] = aq;
1516 +static struct rpivid_q_aux *aux_q_new(struct rpivid_ctx *const ctx,
1517 + const unsigned int q_index)
1519 + struct rpivid_q_aux *aq;
1520 + unsigned long lockflags;
1522 + spin_lock_irqsave(&ctx->aux_lock, lockflags);
1524 + * If we already have this allocated to a slot then use that
1525 + * and assume that it will all work itself out in the pipeline
1527 + if ((aq = ctx->aux_ents[q_index]) != NULL) {
1529 + } else if ((aq = ctx->aux_free) != NULL) {
1530 + ctx->aux_free = aq->next;
1533 + aq->q_index = q_index;
1534 + ctx->aux_ents[q_index] = aq;
1536 + spin_unlock_irqrestore(&ctx->aux_lock, lockflags);
1539 + aq = aux_q_alloc(ctx, q_index);
1544 +static struct rpivid_q_aux *aux_q_ref_idx(struct rpivid_ctx *const ctx,
1545 + const int q_index)
1547 + unsigned long lockflags;
1548 + struct rpivid_q_aux *aq;
1550 + spin_lock_irqsave(&ctx->aux_lock, lockflags);
1551 + if ((aq = ctx->aux_ents[q_index]) != NULL)
1553 + spin_unlock_irqrestore(&ctx->aux_lock, lockflags);
1558 +static struct rpivid_q_aux *aux_q_ref(struct rpivid_ctx *const ctx,
1559 + struct rpivid_q_aux *const aq)
1562 + unsigned long lockflags;
1564 + spin_lock_irqsave(&ctx->aux_lock, lockflags);
1568 + spin_unlock_irqrestore(&ctx->aux_lock, lockflags);
1573 +static void aux_q_release(struct rpivid_ctx *const ctx,
1574 + struct rpivid_q_aux **const paq)
1576 + struct rpivid_q_aux *const aq = *paq;
1577 + unsigned long lockflags;
1584 + spin_lock_irqsave(&ctx->aux_lock, lockflags);
1585 + if (--aq->refcount == 0) {
1586 + aq->next = ctx->aux_free;
1587 + ctx->aux_free = aq;
1588 + ctx->aux_ents[aq->q_index] = NULL;
1589 + aq->q_index = ~0U;
1591 + spin_unlock_irqrestore(&ctx->aux_lock, lockflags);
1594 +static void aux_q_init(struct rpivid_ctx *const ctx)
1596 + spin_lock_init(&ctx->aux_lock);
1597 + ctx->aux_free = NULL;
1600 +static void aux_q_uninit(struct rpivid_ctx *const ctx)
1602 + struct rpivid_q_aux *aq;
1604 + ctx->colmv_picsize = 0;
1605 + ctx->colmv_stride = 0;
1606 + while ((aq = ctx->aux_free) != NULL) {
1607 + ctx->aux_free = aq->next;
1608 + aux_q_free(ctx, aq);
1612 +//////////////////////////////////////////////////////////////////////////////
1615 + * Initialisation process for context variables (CABAC init)
1616 + * see H.265 9.3.2.2
1618 + * N.B. If comparing with FFmpeg, note that this h/w uses slightly different
1619 + * offsets to FFmpeg's array
1622 +/* Actual number of values */
1623 +#define RPI_PROB_VALS 154U
1624 +/* Rounded up as we copy words */
1625 +#define RPI_PROB_ARRAY_SIZE ((154 + 3) & ~3)
1627 +/* Initialiser values - see tables H.265 9-4 through 9-42 */
1628 +static const u8 prob_init[3][156] = {
1630 + 153, 200, 139, 141, 157, 154, 154, 154, 154, 154, 184, 154, 154,
1631 + 154, 184, 63, 154, 154, 154, 154, 154, 154, 154, 154, 154, 154,
1632 + 154, 154, 154, 153, 138, 138, 111, 141, 94, 138, 182, 154, 154,
1633 + 154, 140, 92, 137, 138, 140, 152, 138, 139, 153, 74, 149, 92,
1634 + 139, 107, 122, 152, 140, 179, 166, 182, 140, 227, 122, 197, 110,
1635 + 110, 124, 125, 140, 153, 125, 127, 140, 109, 111, 143, 127, 111,
1636 + 79, 108, 123, 63, 110, 110, 124, 125, 140, 153, 125, 127, 140,
1637 + 109, 111, 143, 127, 111, 79, 108, 123, 63, 91, 171, 134, 141,
1638 + 138, 153, 136, 167, 152, 152, 139, 139, 111, 111, 125, 110, 110,
1639 + 94, 124, 108, 124, 107, 125, 141, 179, 153, 125, 107, 125, 141,
1640 + 179, 153, 125, 107, 125, 141, 179, 153, 125, 140, 139, 182, 182,
1641 + 152, 136, 152, 136, 153, 136, 139, 111, 136, 139, 111, 0, 0,
1644 + 153, 185, 107, 139, 126, 197, 185, 201, 154, 149, 154, 139, 154,
1645 + 154, 154, 152, 110, 122, 95, 79, 63, 31, 31, 153, 153, 168,
1646 + 140, 198, 79, 124, 138, 94, 153, 111, 149, 107, 167, 154, 154,
1647 + 154, 154, 196, 196, 167, 154, 152, 167, 182, 182, 134, 149, 136,
1648 + 153, 121, 136, 137, 169, 194, 166, 167, 154, 167, 137, 182, 125,
1649 + 110, 94, 110, 95, 79, 125, 111, 110, 78, 110, 111, 111, 95,
1650 + 94, 108, 123, 108, 125, 110, 94, 110, 95, 79, 125, 111, 110,
1651 + 78, 110, 111, 111, 95, 94, 108, 123, 108, 121, 140, 61, 154,
1652 + 107, 167, 91, 122, 107, 167, 139, 139, 155, 154, 139, 153, 139,
1653 + 123, 123, 63, 153, 166, 183, 140, 136, 153, 154, 166, 183, 140,
1654 + 136, 153, 154, 166, 183, 140, 136, 153, 154, 170, 153, 123, 123,
1655 + 107, 121, 107, 121, 167, 151, 183, 140, 151, 183, 140, 0, 0,
1658 + 153, 160, 107, 139, 126, 197, 185, 201, 154, 134, 154, 139, 154,
1659 + 154, 183, 152, 154, 137, 95, 79, 63, 31, 31, 153, 153, 168,
1660 + 169, 198, 79, 224, 167, 122, 153, 111, 149, 92, 167, 154, 154,
1661 + 154, 154, 196, 167, 167, 154, 152, 167, 182, 182, 134, 149, 136,
1662 + 153, 121, 136, 122, 169, 208, 166, 167, 154, 152, 167, 182, 125,
1663 + 110, 124, 110, 95, 94, 125, 111, 111, 79, 125, 126, 111, 111,
1664 + 79, 108, 123, 93, 125, 110, 124, 110, 95, 94, 125, 111, 111,
1665 + 79, 125, 126, 111, 111, 79, 108, 123, 93, 121, 140, 61, 154,
1666 + 107, 167, 91, 107, 107, 167, 139, 139, 170, 154, 139, 153, 139,
1667 + 123, 123, 63, 124, 166, 183, 140, 136, 153, 154, 166, 183, 140,
1668 + 136, 153, 154, 166, 183, 140, 136, 153, 154, 170, 153, 138, 138,
1669 + 122, 121, 122, 121, 167, 151, 183, 140, 151, 183, 140, 0, 0,
1673 +#define CMDS_WRITE_PROB ((RPI_PROB_ARRAY_SIZE / 4) + 1)
1674 +static void write_prob(struct rpivid_dec_env *const de,
1675 + const struct rpivid_dec_state *const s)
1677 + u8 dst[RPI_PROB_ARRAY_SIZE];
1679 + const unsigned int init_type =
1680 + ((s->sh->flags & V4L2_HEVC_SLICE_PARAMS_FLAG_CABAC_INIT) != 0 &&
1681 + s->sh->slice_type != HEVC_SLICE_I) ?
1682 + s->sh->slice_type + 1 :
1683 + 2 - s->sh->slice_type;
1684 + const u8 *p = prob_init[init_type];
1685 + const int q = clip_int(s->slice_qp, 0, 51);
1688 + for (i = 0; i < RPI_PROB_VALS; i++) {
1689 + int init_value = p[i];
1690 + int m = (init_value >> 4) * 5 - 45;
1691 + int n = ((init_value & 15) << 3) - 16;
1692 + int pre = 2 * (((m * q) >> 4) + n) - 127;
1696 + pre = 124 + (pre & 1);
1699 + for (i = RPI_PROB_VALS; i != RPI_PROB_ARRAY_SIZE; ++i)
1702 + for (i = 0; i < RPI_PROB_ARRAY_SIZE; i += 4)
1703 + p1_apb_write(de, 0x1000 + i,
1704 + dst[i] + (dst[i + 1] << 8) + (dst[i + 2] << 16) +
1705 + (dst[i + 3] << 24));
1708 + * Having written the prob array, back it up.
1709 + * This is not always needed but is a small overhead that simplifies
1710 + * (and speeds up) some multi-tile & WPP scenarios.
1711 + * There are no scenarios where having written a prob we ever want
1712 + * a previous (non-initial) state back
1714 + p1_apb_write(de, RPI_TRANSFER, PROB_BACKUP);
1717 +#define CMDS_WRITE_SCALING_FACTORS NUM_SCALING_FACTORS
1718 +static void write_scaling_factors(struct rpivid_dec_env *const de)
1721 + const u8 *p = (u8 *)de->scaling_factors;
1723 + for (i = 0; i < NUM_SCALING_FACTORS; i += 4, p += 4)
1724 + p1_apb_write(de, 0x2000 + i,
1725 + p[0] + (p[1] << 8) + (p[2] << 16) + (p[3] << 24));
1728 +static inline __u32 dma_to_axi_addr(dma_addr_t a)
1730 + return (__u32)(a >> 6);
1733 +#define CMDS_WRITE_BITSTREAM 4
1734 +static int write_bitstream(struct rpivid_dec_env *const de,
1735 + const struct rpivid_dec_state *const s)
1737 + // Note that FFmpeg V4L2 does not remove emulation prevention bytes,
1738 + // so this is matched in the configuration here.
1739 + // Whether that is the correct behaviour or not is not clear in the
1741 + const int rpi_use_emu = 1;
1742 + unsigned int offset = s->sh->data_byte_offset;
1743 + const unsigned int len = (s->sh->bit_size + 7) / 8 - offset;
1746 + if (s->src_addr != 0) {
1747 + addr = s->src_addr + offset;
1749 + if (len + de->bit_copy_len > de->bit_copy_gptr->size) {
1750 + v4l2_warn(&de->ctx->dev->v4l2_dev,
1751 + "Bit copy buffer overflow: size=%zu, offset=%zu, len=%u\n",
1752 + de->bit_copy_gptr->size,
1753 + de->bit_copy_len, len);
1756 + memcpy(de->bit_copy_gptr->ptr + de->bit_copy_len,
1757 + s->src_buf + offset, len);
1758 + addr = de->bit_copy_gptr->addr + de->bit_copy_len;
1759 + de->bit_copy_len += (len + 63) & ~63;
1761 + offset = addr & 63;
1763 + p1_apb_write(de, RPI_BFBASE, dma_to_axi_addr(addr));
1764 + p1_apb_write(de, RPI_BFNUM, len);
1765 + p1_apb_write(de, RPI_BFCONTROL, offset + (1 << 7)); // Stop
1766 + p1_apb_write(de, RPI_BFCONTROL, offset + (rpi_use_emu << 6));
1770 +//////////////////////////////////////////////////////////////////////////////
1773 + * The slice constant part of the slice register - width and height need to
1774 + * be ORed in later as they are per-tile / WPP-row
1776 +static u32 slice_reg_const(const struct rpivid_dec_state *const s)
1778 + u32 x = (s->max_num_merge_cand << 0) |
1779 + (s->nb_refs[L0] << 4) |
1780 + (s->nb_refs[L1] << 8) |
1781 + (s->sh->slice_type << 12);
1783 + if (s->sh->flags & V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_SAO_LUMA)
1785 + if (s->sh->flags & V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_SAO_CHROMA)
1787 + if (s->sh->slice_type == HEVC_SLICE_B &&
1788 + (s->sh->flags & V4L2_HEVC_SLICE_PARAMS_FLAG_MVD_L1_ZERO))
1794 +//////////////////////////////////////////////////////////////////////////////
1796 +#define CMDS_NEW_SLICE_SEGMENT (4 + CMDS_WRITE_SCALING_FACTORS)
1797 +static void new_slice_segment(struct rpivid_dec_env *const de,
1798 + const struct rpivid_dec_state *const s)
1800 + const struct v4l2_ctrl_hevc_sps *const sps = &s->sps;
1801 + const struct v4l2_ctrl_hevc_pps *const pps = &s->pps;
1805 + ((sps->log2_min_luma_coding_block_size_minus3 + 3) << 0) |
1806 + (s->log2_ctb_size << 4) |
1807 + ((sps->log2_min_luma_transform_block_size_minus2 + 2)
1809 + ((sps->log2_min_luma_transform_block_size_minus2 + 2 +
1810 + sps->log2_diff_max_min_luma_transform_block_size)
1812 + ((sps->bit_depth_luma_minus8 + 8) << 16) |
1813 + ((sps->bit_depth_chroma_minus8 + 8) << 20) |
1814 + (sps->max_transform_hierarchy_depth_intra << 24) |
1815 + (sps->max_transform_hierarchy_depth_inter << 28));
1819 + ((sps->pcm_sample_bit_depth_luma_minus1 + 1) << 0) |
1820 + ((sps->pcm_sample_bit_depth_chroma_minus1 + 1) << 4) |
1821 + ((sps->log2_min_pcm_luma_coding_block_size_minus3 + 3)
1823 + ((sps->log2_min_pcm_luma_coding_block_size_minus3 + 3 +
1824 + sps->log2_diff_max_min_pcm_luma_coding_block_size)
1826 + (((sps->flags & V4L2_HEVC_SPS_FLAG_SEPARATE_COLOUR_PLANE) ?
1827 + 0 : sps->chroma_format_idc) << 16) |
1828 + ((!!(sps->flags & V4L2_HEVC_SPS_FLAG_AMP_ENABLED)) << 18) |
1829 + ((!!(sps->flags & V4L2_HEVC_SPS_FLAG_PCM_ENABLED)) << 19) |
1830 + ((!!(sps->flags & V4L2_HEVC_SPS_FLAG_SCALING_LIST_ENABLED))
1833 + V4L2_HEVC_SPS_FLAG_STRONG_INTRA_SMOOTHING_ENABLED))
1838 + ((s->log2_ctb_size - pps->diff_cu_qp_delta_depth) << 0) |
1839 + ((!!(pps->flags & V4L2_HEVC_PPS_FLAG_CU_QP_DELTA_ENABLED))
1842 + V4L2_HEVC_PPS_FLAG_TRANSQUANT_BYPASS_ENABLED))
1844 + ((!!(pps->flags & V4L2_HEVC_PPS_FLAG_TRANSFORM_SKIP_ENABLED))
1847 + V4L2_HEVC_PPS_FLAG_SIGN_DATA_HIDING_ENABLED))
1849 + (((pps->pps_cb_qp_offset + s->sh->slice_cb_qp_offset) & 255)
1851 + (((pps->pps_cr_qp_offset + s->sh->slice_cr_qp_offset) & 255)
1854 + V4L2_HEVC_PPS_FLAG_CONSTRAINED_INTRA_PRED))
1857 + if (!s->start_ts &&
1858 + (sps->flags & V4L2_HEVC_SPS_FLAG_SCALING_LIST_ENABLED) != 0)
1859 + write_scaling_factors(de);
1861 + if (!s->dependent_slice_segment_flag) {
1862 + int ctb_col = s->sh->slice_segment_addr %
1863 + de->pic_width_in_ctbs_y;
1864 + int ctb_row = s->sh->slice_segment_addr /
1865 + de->pic_width_in_ctbs_y;
1867 + de->reg_slicestart = (ctb_col << 0) + (ctb_row << 16);
1870 + p1_apb_write(de, RPI_SLICESTART, de->reg_slicestart);
1873 +//////////////////////////////////////////////////////////////////////////////
1876 +static void msg_slice(struct rpivid_dec_env *const de, const u16 msg)
1878 + de->slice_msgs[de->num_slice_msgs++] = msg;
1881 +#define CMDS_PROGRAM_SLICECMDS (1 + SLICE_MSGS_MAX)
1882 +static void program_slicecmds(struct rpivid_dec_env *const de,
1883 + const int sliceid)
1887 + p1_apb_write(de, RPI_SLICECMDS, de->num_slice_msgs + (sliceid << 8));
1889 + for (i = 0; i < de->num_slice_msgs; i++)
1890 + p1_apb_write(de, 0x4000 + 4 * i, de->slice_msgs[i] & 0xffff);
1893 +// NoBackwardPredictionFlag 8.3.5
1894 +// Simply checks POCs
1895 +static int has_backward(const struct v4l2_hevc_dpb_entry *const dpb,
1896 + const __u8 *const idx, const unsigned int n,
1897 + const s32 cur_poc)
1901 + for (i = 0; i < n; ++i) {
1902 + if (cur_poc < dpb[idx[i]].pic_order_cnt_val)
1908 +static void pre_slice_decode(struct rpivid_dec_env *const de,
1909 + const struct rpivid_dec_state *const s)
1911 + const struct v4l2_ctrl_hevc_slice_params *const sh = s->sh;
1912 + const struct v4l2_ctrl_hevc_decode_params *const dec = s->dec;
1913 + int weighted_pred_flag, idx;
1915 + unsigned int collocated_from_l0_flag;
1917 + de->num_slice_msgs = 0;
1920 + if (sh->slice_type == HEVC_SLICE_I)
1922 + if (sh->slice_type == HEVC_SLICE_P)
1924 + if (sh->slice_type == HEVC_SLICE_B)
1927 + cmd_slice |= (s->nb_refs[L0] << 2) | (s->nb_refs[L1] << 6) |
1928 + (s->max_num_merge_cand << 11);
1930 + collocated_from_l0_flag =
1931 + !s->slice_temporal_mvp ||
1932 + sh->slice_type != HEVC_SLICE_B ||
1933 + (sh->flags & V4L2_HEVC_SLICE_PARAMS_FLAG_COLLOCATED_FROM_L0);
1934 + cmd_slice |= collocated_from_l0_flag << 14;
1936 + if (sh->slice_type == HEVC_SLICE_P || sh->slice_type == HEVC_SLICE_B) {
1937 + // Flag to say all reference pictures are from the past
1938 + const int no_backward_pred_flag =
1939 + has_backward(dec->dpb, sh->ref_idx_l0, s->nb_refs[L0],
1940 + sh->slice_pic_order_cnt) &&
1941 + has_backward(dec->dpb, sh->ref_idx_l1, s->nb_refs[L1],
1942 + sh->slice_pic_order_cnt);
1943 + cmd_slice |= no_backward_pred_flag << 10;
1944 + msg_slice(de, cmd_slice);
1946 + if (s->slice_temporal_mvp) {
1947 + const __u8 *const rpl = collocated_from_l0_flag ?
1948 + sh->ref_idx_l0 : sh->ref_idx_l1;
1949 + de->dpbno_col = rpl[sh->collocated_ref_idx];
1950 + //v4l2_info(&de->ctx->dev->v4l2_dev,
1951 + // "L0=%d col_ref_idx=%d,
1952 + // dpb_no=%d\n", collocated_from_l0_flag,
1953 + // sh->collocated_ref_idx, de->dpbno_col);
1956 + // Write reference picture descriptions
1957 + weighted_pred_flag =
1958 + sh->slice_type == HEVC_SLICE_P ?
1959 + !!(s->pps.flags & V4L2_HEVC_PPS_FLAG_WEIGHTED_PRED) :
1960 + !!(s->pps.flags & V4L2_HEVC_PPS_FLAG_WEIGHTED_BIPRED);
1962 + for (idx = 0; idx < s->nb_refs[L0]; ++idx) {
1963 + unsigned int dpb_no = sh->ref_idx_l0[idx];
1964 + //v4l2_info(&de->ctx->dev->v4l2_dev,
1965 + // "L0[%d]=dpb[%d]\n", idx, dpb_no);
1969 + ((dec->dpb[dpb_no].flags &
1970 + V4L2_HEVC_DPB_ENTRY_LONG_TERM_REFERENCE) ?
1972 + (weighted_pred_flag ? (3 << 5) : 0));
1973 + msg_slice(de, dec->dpb[dpb_no].pic_order_cnt_val & 0xffff);
1975 + if (weighted_pred_flag) {
1976 + const struct v4l2_hevc_pred_weight_table
1977 + *const w = &sh->pred_weight_table;
1978 + const int luma_weight_denom =
1979 + (1 << w->luma_log2_weight_denom);
1980 + const unsigned int chroma_log2_weight_denom =
1981 + (w->luma_log2_weight_denom +
1982 + w->delta_chroma_log2_weight_denom);
1983 + const int chroma_weight_denom =
1984 + (1 << chroma_log2_weight_denom);
1987 + w->luma_log2_weight_denom |
1988 + (((w->delta_luma_weight_l0[idx] +
1989 + luma_weight_denom) & 0x1ff)
1991 + msg_slice(de, w->luma_offset_l0[idx] & 0xff);
1993 + chroma_log2_weight_denom |
1994 + (((w->delta_chroma_weight_l0[idx][0] +
1995 + chroma_weight_denom) & 0x1ff)
1998 + w->chroma_offset_l0[idx][0] & 0xff);
2000 + chroma_log2_weight_denom |
2001 + (((w->delta_chroma_weight_l0[idx][1] +
2002 + chroma_weight_denom) & 0x1ff)
2005 + w->chroma_offset_l0[idx][1] & 0xff);
2009 + for (idx = 0; idx < s->nb_refs[L1]; ++idx) {
2010 + unsigned int dpb_no = sh->ref_idx_l1[idx];
2011 + //v4l2_info(&de->ctx->dev->v4l2_dev,
2012 + // "L1[%d]=dpb[%d]\n", idx, dpb_no);
2015 + ((dec->dpb[dpb_no].flags &
2016 + V4L2_HEVC_DPB_ENTRY_LONG_TERM_REFERENCE) ?
2018 + (weighted_pred_flag ? (3 << 5) : 0));
2019 + msg_slice(de, dec->dpb[dpb_no].pic_order_cnt_val & 0xffff);
2020 + if (weighted_pred_flag) {
2021 + const struct v4l2_hevc_pred_weight_table
2022 + *const w = &sh->pred_weight_table;
2023 + const int luma_weight_denom =
2024 + (1 << w->luma_log2_weight_denom);
2025 + const unsigned int chroma_log2_weight_denom =
2026 + (w->luma_log2_weight_denom +
2027 + w->delta_chroma_log2_weight_denom);
2028 + const int chroma_weight_denom =
2029 + (1 << chroma_log2_weight_denom);
2032 + w->luma_log2_weight_denom |
2033 + (((w->delta_luma_weight_l1[idx] +
2034 + luma_weight_denom) & 0x1ff) << 3));
2035 + msg_slice(de, w->luma_offset_l1[idx] & 0xff);
2037 + chroma_log2_weight_denom |
2038 + (((w->delta_chroma_weight_l1[idx][0] +
2039 + chroma_weight_denom) & 0x1ff)
2042 + w->chroma_offset_l1[idx][0] & 0xff);
2044 + chroma_log2_weight_denom |
2045 + (((w->delta_chroma_weight_l1[idx][1] +
2046 + chroma_weight_denom) & 0x1ff)
2049 + w->chroma_offset_l1[idx][1] & 0xff);
2053 + msg_slice(de, cmd_slice);
2057 + (sh->slice_beta_offset_div2 & 15) |
2058 + ((sh->slice_tc_offset_div2 & 15) << 4) |
2060 + V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_DEBLOCKING_FILTER_DISABLED) ?
2063 + V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED) ?
2066 + V4L2_HEVC_PPS_FLAG_LOOP_FILTER_ACROSS_TILES_ENABLED) ?
2069 + msg_slice(de, ((sh->slice_cr_qp_offset & 31) << 5) +
2070 + (sh->slice_cb_qp_offset & 31)); // CMD_QPOFF
2073 +#define CMDS_WRITE_SLICE 1
2074 +static void write_slice(struct rpivid_dec_env *const de,
2075 + const struct rpivid_dec_state *const s,
2076 + const u32 slice_const,
2077 + const unsigned int ctb_col,
2078 + const unsigned int ctb_row)
2080 + const unsigned int cs = (1 << s->log2_ctb_size);
2081 + const unsigned int w_last = s->sps.pic_width_in_luma_samples & (cs - 1);
2082 + const unsigned int h_last = s->sps.pic_height_in_luma_samples & (cs - 1);
2084 + p1_apb_write(de, RPI_SLICE,
2086 + ((ctb_col + 1 < s->ctb_width || !w_last ?
2087 + cs : w_last) << 17) |
2088 + ((ctb_row + 1 < s->ctb_height || !h_last ?
2089 + cs : h_last) << 24));
2092 +#define PAUSE_MODE_WPP 1
2093 +#define PAUSE_MODE_TILE 0xffff
2096 + * N.B. This can be called to fill in data from the previous slice, so it must
2097 + * not use any state data that may change from slice to slice (e.g. qp)
2099 +#define CMDS_NEW_ENTRY_POINT (6 + CMDS_WRITE_SLICE)
2100 +static void new_entry_point(struct rpivid_dec_env *const de,
2101 + const struct rpivid_dec_state *const s,
2102 + const bool do_bte,
2103 + const bool reset_qp_y,
2104 + const u32 pause_mode,
2105 + const unsigned int tile_x,
2106 + const unsigned int tile_y,
2107 + const unsigned int ctb_col,
2108 + const unsigned int ctb_row,
2109 + const unsigned int slice_qp,
2110 + const u32 slice_const)
2112 + const unsigned int endx = s->col_bd[tile_x + 1] - 1;
2113 + const unsigned int endy = (pause_mode == PAUSE_MODE_WPP) ?
2114 + ctb_row : s->row_bd[tile_y + 1] - 1;
2116 + p1_apb_write(de, RPI_TILESTART,
2117 + s->col_bd[tile_x] | (s->row_bd[tile_y] << 16));
2118 + p1_apb_write(de, RPI_TILEEND, endx | (endy << 16));
2121 + p1_apb_write(de, RPI_BEGINTILEEND, endx | (endy << 16));
2123 + write_slice(de, s, slice_const, endx, endy);
2126 + unsigned int sps_qp_bd_offset =
2127 + 6 * s->sps.bit_depth_luma_minus8;
2129 + p1_apb_write(de, RPI_QP, sps_qp_bd_offset + slice_qp);
2132 + p1_apb_write(de, RPI_MODE,
2134 + ((endx == s->ctb_width - 1) << 17) |
2135 + ((endy == s->ctb_height - 1) << 18));
2137 + p1_apb_write(de, RPI_CONTROL, (ctb_col << 0) | (ctb_row << 16));
2139 + de->entry_tile_x = tile_x;
2140 + de->entry_tile_y = tile_y;
2141 + de->entry_ctb_x = ctb_col;
2142 + de->entry_ctb_y = ctb_row;
2143 + de->entry_qp = slice_qp;
2144 + de->entry_slice = slice_const;
2147 +//////////////////////////////////////////////////////////////////////////////
2150 +#define CMDS_WPP_PAUSE 4
2151 +static void wpp_pause(struct rpivid_dec_env *const de, int ctb_row)
2153 + p1_apb_write(de, RPI_STATUS, (ctb_row << 18) | 0x25);
2154 + p1_apb_write(de, RPI_TRANSFER, PROB_BACKUP);
2155 + p1_apb_write(de, RPI_MODE,
2156 + ctb_row == de->pic_height_in_ctbs_y - 1 ?
2157 + 0x70000 : 0x30000);
2158 + p1_apb_write(de, RPI_CONTROL, (ctb_row << 16) + 2);
2161 +#define CMDS_WPP_ENTRY_FILL_1 (CMDS_WPP_PAUSE + 2 + CMDS_NEW_ENTRY_POINT)
2162 +static int wpp_entry_fill(struct rpivid_dec_env *const de,
2163 + const struct rpivid_dec_state *const s,
2164 + const unsigned int last_y)
2167 + const unsigned int last_x = s->ctb_width - 1;
2169 + rv = cmds_check_space(de, CMDS_WPP_ENTRY_FILL_1 *
2170 + (last_y - de->entry_ctb_y));
2174 + while (de->entry_ctb_y < last_y) {
2175 + /* wpp_entry_x/y set by wpp_entry_point */
2176 + if (s->ctb_width > 2)
2177 + wpp_pause(de, de->entry_ctb_y);
2178 + p1_apb_write(de, RPI_STATUS,
2179 + (de->entry_ctb_y << 18) | (last_x << 5) | 2);
2181 + /* if width == 1 then the saved state is the init one */
2182 + if (s->ctb_width == 2)
2183 + p1_apb_write(de, RPI_TRANSFER, PROB_BACKUP);
2185 + p1_apb_write(de, RPI_TRANSFER, PROB_RELOAD);
2187 + new_entry_point(de, s, false, true, PAUSE_MODE_WPP,
2188 + 0, 0, 0, de->entry_ctb_y + 1,
2189 + de->entry_qp, de->entry_slice);
2194 +static int wpp_end_previous_slice(struct rpivid_dec_env *const de,
2195 + const struct rpivid_dec_state *const s)
2199 + rv = wpp_entry_fill(de, s, s->prev_ctb_y);
2203 + rv = cmds_check_space(de, CMDS_WPP_PAUSE + 2);
2207 + if (de->entry_ctb_x < 2 &&
2208 + (de->entry_ctb_y < s->start_ctb_y || s->start_ctb_x > 2) &&
2210 + wpp_pause(de, s->prev_ctb_y);
2211 + p1_apb_write(de, RPI_STATUS,
2212 + 1 | (s->prev_ctb_x << 5) | (s->prev_ctb_y << 18));
2213 + if (s->start_ctb_x == 2 ||
2214 + (s->ctb_width == 2 && de->entry_ctb_y < s->start_ctb_y))
2215 + p1_apb_write(de, RPI_TRANSFER, PROB_BACKUP);
1219 +/* Only main profile is supported, so WPP => !Tiles, which makes some of the
1220 + * next chunk of code simpler
2222 +static int wpp_decode_slice(struct rpivid_dec_env *const de,
2223 + const struct rpivid_dec_state *const s,
2226 + bool reset_qp_y = true;
2227 + const bool indep = !s->dependent_slice_segment_flag;
2230 + if (s->start_ts) {
2231 + rv = wpp_end_previous_slice(de, s);
2235 + pre_slice_decode(de, s);
2237 + rv = cmds_check_space(de,
2238 + CMDS_WRITE_BITSTREAM +
2240 + CMDS_PROGRAM_SLICECMDS +
2241 + CMDS_NEW_SLICE_SEGMENT +
2242 + CMDS_NEW_ENTRY_POINT);
2246 + rv = write_bitstream(de, s);
2250 + if (!s->start_ts || indep || s->ctb_width == 1)
2251 + write_prob(de, s);
2252 + else if (!s->start_ctb_x)
2253 + p1_apb_write(de, RPI_TRANSFER, PROB_RELOAD);
2255 + reset_qp_y = false;
2257 + program_slicecmds(de, s->slice_idx);
2258 + new_slice_segment(de, s);
2259 + new_entry_point(de, s, indep, reset_qp_y, PAUSE_MODE_WPP,
2260 + 0, 0, s->start_ctb_x, s->start_ctb_y,
2261 + s->slice_qp, slice_reg_const(s));
2264 + rv = wpp_entry_fill(de, s, s->ctb_height - 1);
2268 + rv = cmds_check_space(de, CMDS_WPP_PAUSE + 1);
2272 + if (de->entry_ctb_x < 2 && s->ctb_width > 2)
2273 + wpp_pause(de, s->ctb_height - 1);
2275 + p1_apb_write(de, RPI_STATUS,
2276 + 1 | ((s->ctb_width - 1) << 5) |
2277 + ((s->ctb_height - 1) << 18));
2282 +//////////////////////////////////////////////////////////////////////////////
2285 +// Guarantees 1 cmd entry free on exit
2286 +static int tile_entry_fill(struct rpivid_dec_env *const de,
2287 + const struct rpivid_dec_state *const s,
2288 + const unsigned int last_tile_x,
2289 + const unsigned int last_tile_y)
2291 + while (de->entry_tile_y < last_tile_y ||
2292 + (de->entry_tile_y == last_tile_y &&
2293 + de->entry_tile_x < last_tile_x)) {
2295 + unsigned int t_x = de->entry_tile_x;
2296 + unsigned int t_y = de->entry_tile_y;
2297 + const unsigned int last_x = s->col_bd[t_x + 1] - 1;
2298 + const unsigned int last_y = s->row_bd[t_y + 1] - 1;
2300 + // One more than needed here
2301 + rv = cmds_check_space(de, CMDS_NEW_ENTRY_POINT + 3);
2305 + p1_apb_write(de, RPI_STATUS,
2306 + 2 | (last_x << 5) | (last_y << 18));
2307 + p1_apb_write(de, RPI_TRANSFER, PROB_RELOAD);
2310 + if (++t_x >= s->tile_width) {
2315 + new_entry_point(de, s, false, true, PAUSE_MODE_TILE,
2316 + t_x, t_y, s->col_bd[t_x], s->row_bd[t_y],
2317 + de->entry_qp, de->entry_slice);
2323 + * Write STATUS register with expected end CTU address of previous slice
2325 +static int end_previous_slice(struct rpivid_dec_env *const de,
2326 + const struct rpivid_dec_state *const s)
2330 + rv = tile_entry_fill(de, s,
2331 + ctb_to_tile_x(s, s->prev_ctb_x),
2332 + ctb_to_tile_y(s, s->prev_ctb_y));
2336 + p1_apb_write(de, RPI_STATUS,
2337 + 1 | (s->prev_ctb_x << 5) | (s->prev_ctb_y << 18));
2341 +static int decode_slice(struct rpivid_dec_env *const de,
2342 + const struct rpivid_dec_state *const s,
2346 + unsigned int tile_x = ctb_to_tile_x(s, s->start_ctb_x);
2347 + unsigned int tile_y = ctb_to_tile_y(s, s->start_ctb_y);
2350 + if (s->start_ts) {
2351 + rv = end_previous_slice(de, s);
2356 + rv = cmds_check_space(de,
2357 + CMDS_WRITE_BITSTREAM +
2359 + CMDS_PROGRAM_SLICECMDS +
2360 + CMDS_NEW_SLICE_SEGMENT +
2361 + CMDS_NEW_ENTRY_POINT);
2365 + pre_slice_decode(de, s);
2366 + rv = write_bitstream(de, s);
2370 + reset_qp_y = !s->start_ts ||
2371 + !s->dependent_slice_segment_flag ||
2372 + tile_x != ctb_to_tile_x(s, s->prev_ctb_x) ||
2373 + tile_y != ctb_to_tile_y(s, s->prev_ctb_y);
2375 + write_prob(de, s);
2377 + program_slicecmds(de, s->slice_idx);
2378 + new_slice_segment(de, s);
2379 + new_entry_point(de, s, !s->dependent_slice_segment_flag, reset_qp_y,
2381 + tile_x, tile_y, s->start_ctb_x, s->start_ctb_y,
2382 + s->slice_qp, slice_reg_const(s));
2385 + * If this is the last slice then fill in the other tile entries
2386 + * now, otherwise this will be done at the start of the next slice
2387 + * when it will be known where this slice finishes
2390 + rv = tile_entry_fill(de, s,
2391 + s->tile_width - 1,
2392 + s->tile_height - 1);
2395 + p1_apb_write(de, RPI_STATUS,
2396 + 1 | ((s->ctb_width - 1) << 5) |
2397 + ((s->ctb_height - 1) << 18));
2402 +//////////////////////////////////////////////////////////////////////////////
2405 +static void expand_scaling_list(const unsigned int size_id,
2407 +				const u8 *const src0, u8 dc)
2410 + unsigned int x, y;
2412 + switch (size_id) {
2414 + memcpy(dst0, src0, 16);
2417 + memcpy(dst0, src0, 64);
2422 + for (y = 0; y != 16; y++) {
2423 + const u8 *s = src0 + (y >> 1) * 8;
2425 + for (x = 0; x != 8; ++x) {
2435 + for (y = 0; y != 32; y++) {
2436 + const u8 *s = src0 + (y >> 2) * 8;
2438 + for (x = 0; x != 8; ++x) {
2450 +static void populate_scaling_factors(const struct rpivid_run *const run,
2451 + struct rpivid_dec_env *const de,
2452 + const struct rpivid_dec_state *const s)
2454 + const struct v4l2_ctrl_hevc_scaling_matrix *const sl =
2455 + run->h265.scaling_matrix;
2456 + // Array of constants for scaling factors
2457 + static const u32 scaling_factor_offsets[4][6] = {
2458 + // MID0 MID1 MID2 MID3 MID4 MID5
2460 + { 0x0000, 0x0010, 0x0020, 0x0030, 0x0040, 0x0050 },
2462 + { 0x0060, 0x00A0, 0x00E0, 0x0120, 0x0160, 0x01A0 },
2464 + { 0x01E0, 0x02E0, 0x03E0, 0x04E0, 0x05E0, 0x06E0 },
2466 + { 0x07E0, 0x0BE0, 0x0000, 0x0000, 0x0000, 0x0000 }
2471 + for (mid = 0; mid < 6; mid++)
2472 + expand_scaling_list(0, de->scaling_factors +
2473 + scaling_factor_offsets[0][mid],
2474 + sl->scaling_list_4x4[mid], 0);
2475 + for (mid = 0; mid < 6; mid++)
2476 + expand_scaling_list(1, de->scaling_factors +
2477 + scaling_factor_offsets[1][mid],
2478 + sl->scaling_list_8x8[mid], 0);
2479 + for (mid = 0; mid < 6; mid++)
2480 + expand_scaling_list(2, de->scaling_factors +
2481 + scaling_factor_offsets[2][mid],
2482 + sl->scaling_list_16x16[mid],
2483 + sl->scaling_list_dc_coef_16x16[mid]);
2484 + for (mid = 0; mid < 2; mid++)
2485 + expand_scaling_list(3, de->scaling_factors +
2486 + scaling_factor_offsets[3][mid],
2487 + sl->scaling_list_32x32[mid],
2488 + sl->scaling_list_dc_coef_32x32[mid]);
2491 +static void free_ps_info(struct rpivid_dec_state *const s)
2493 + kfree(s->ctb_addr_rs_to_ts);
2494 + s->ctb_addr_rs_to_ts = NULL;
2495 + kfree(s->ctb_addr_ts_to_rs);
2496 + s->ctb_addr_ts_to_rs = NULL;
2504 +static unsigned int tile_width(const struct rpivid_dec_state *const s,
2505 + const unsigned int t_x)
2507 + return s->col_bd[t_x + 1] - s->col_bd[t_x];
2510 +static unsigned int tile_height(const struct rpivid_dec_state *const s,
2511 + const unsigned int t_y)
2513 + return s->row_bd[t_y + 1] - s->row_bd[t_y];
2516 +static void fill_rs_to_ts(struct rpivid_dec_state *const s)
2518 + unsigned int ts = 0;
2520 + unsigned int tr_rs = 0;
2522 + for (t_y = 0; t_y != s->tile_height; ++t_y) {
2523 + const unsigned int t_h = tile_height(s, t_y);
2525 + unsigned int tc_rs = tr_rs;
2527 + for (t_x = 0; t_x != s->tile_width; ++t_x) {
2528 + const unsigned int t_w = tile_width(s, t_x);
2530 + unsigned int rs = tc_rs;
2532 + for (y = 0; y != t_h; ++y) {
2535 + for (x = 0; x != t_w; ++x) {
2536 + s->ctb_addr_rs_to_ts[rs + x] = ts;
2537 + s->ctb_addr_ts_to_rs[ts] = rs + x;
2540 + rs += s->ctb_width;
2544 + tr_rs += t_h * s->ctb_width;
2548 +static int updated_ps(struct rpivid_dec_state *const s)
2554 + // Inferred parameters
2555 + s->log2_ctb_size = s->sps.log2_min_luma_coding_block_size_minus3 + 3 +
2556 + s->sps.log2_diff_max_min_luma_coding_block_size;
2558 + s->ctb_width = (s->sps.pic_width_in_luma_samples +
2559 + (1 << s->log2_ctb_size) - 1) >>
2561 + s->ctb_height = (s->sps.pic_height_in_luma_samples +
2562 + (1 << s->log2_ctb_size) - 1) >>
2564 + s->ctb_size = s->ctb_width * s->ctb_height;
2566 + // Inferred parameters
2568 + s->ctb_addr_rs_to_ts = kmalloc_array(s->ctb_size,
2569 + sizeof(*s->ctb_addr_rs_to_ts),
2571 + if (!s->ctb_addr_rs_to_ts)
2573 + s->ctb_addr_ts_to_rs = kmalloc_array(s->ctb_size,
2574 + sizeof(*s->ctb_addr_ts_to_rs),
2576 + if (!s->ctb_addr_ts_to_rs)
2579 + if (!(s->pps.flags & V4L2_HEVC_PPS_FLAG_TILES_ENABLED)) {
2580 + s->tile_width = 1;
2581 + s->tile_height = 1;
2583 + s->tile_width = s->pps.num_tile_columns_minus1 + 1;
2584 + s->tile_height = s->pps.num_tile_rows_minus1 + 1;
2587 + s->col_bd = kmalloc((s->tile_width + 1) * sizeof(*s->col_bd),
2591 + s->row_bd = kmalloc((s->tile_height + 1) * sizeof(*s->row_bd),
2597 + for (i = 1; i < s->tile_width; i++)
2598 + s->col_bd[i] = s->col_bd[i - 1] +
2599 + s->pps.column_width_minus1[i - 1] + 1;
2600 + s->col_bd[s->tile_width] = s->ctb_width;
2603 + for (i = 1; i < s->tile_height; i++)
2604 + s->row_bd[i] = s->row_bd[i - 1] +
2605 + s->pps.row_height_minus1[i - 1] + 1;
2606 + s->row_bd[s->tile_height] = s->ctb_height;
2613 + /* Set invalid to force reload */
2614 + s->sps.pic_width_in_luma_samples = 0;
2618 +static int write_cmd_buffer(struct rpivid_dev *const dev,
2619 + struct rpivid_dec_env *const de,
2620 + const struct rpivid_dec_state *const s)
2622 + const size_t cmd_size = ALIGN(de->cmd_len * sizeof(de->cmd_fifo[0]),
2623 + dev->cache_align);
2625 + de->cmd_addr = dma_map_single(dev->dev, de->cmd_fifo,
2626 + cmd_size, DMA_TO_DEVICE);
2627 + if (dma_mapping_error(dev->dev, de->cmd_addr)) {
2628 + v4l2_err(&dev->v4l2_dev,
2629 + "Map cmd buffer (%zu): FAILED\n", cmd_size);
2632 + de->cmd_size = cmd_size;
2636 +static void setup_colmv(struct rpivid_ctx *const ctx, struct rpivid_run *run,
2637 + struct rpivid_dec_state *const s)
2639 + ctx->colmv_stride = ALIGN(s->sps.pic_width_in_luma_samples, 64);
2640 + ctx->colmv_picsize = ctx->colmv_stride *
2641 + (ALIGN(s->sps.pic_height_in_luma_samples, 64) >> 4);
2644 +// Can be called from irq context
2645 +static struct rpivid_dec_env *dec_env_new(struct rpivid_ctx *const ctx)
2647 + struct rpivid_dec_env *de;
2648 + unsigned long lock_flags;
2650 + spin_lock_irqsave(&ctx->dec_lock, lock_flags);
2652 + de = ctx->dec_free;
2654 + ctx->dec_free = de->next;
2656 + de->state = RPIVID_DECODE_SLICE_START;
2659 + spin_unlock_irqrestore(&ctx->dec_lock, lock_flags);
2663 +// Can be called from irq context
2664 +static void dec_env_delete(struct rpivid_dec_env *const de)
2666 + struct rpivid_ctx * const ctx = de->ctx;
2667 + unsigned long lock_flags;
2669 + if (de->cmd_size) {
2670 + dma_unmap_single(ctx->dev->dev, de->cmd_addr, de->cmd_size,
2675 + aux_q_release(ctx, &de->frame_aux);
2676 + aux_q_release(ctx, &de->col_aux);
2678 + spin_lock_irqsave(&ctx->dec_lock, lock_flags);
2680 + de->state = RPIVID_DECODE_END;
2681 + de->next = ctx->dec_free;
2682 + ctx->dec_free = de;
2684 + spin_unlock_irqrestore(&ctx->dec_lock, lock_flags);
2687 +static void dec_env_uninit(struct rpivid_ctx *const ctx)
2691 + if (ctx->dec_pool) {
2692 + for (i = 0; i != RPIVID_DEC_ENV_COUNT; ++i) {
2693 + struct rpivid_dec_env *const de = ctx->dec_pool + i;
2695 + kfree(de->cmd_fifo);
2698 + kfree(ctx->dec_pool);
2701 + ctx->dec_pool = NULL;
2702 + ctx->dec_free = NULL;
2705 +static int dec_env_init(struct rpivid_ctx *const ctx)
2709 + ctx->dec_pool = kzalloc(sizeof(*ctx->dec_pool) * RPIVID_DEC_ENV_COUNT,
2711 + if (!ctx->dec_pool)
2714 + spin_lock_init(&ctx->dec_lock);
2716 + // Build free chain
2717 + ctx->dec_free = ctx->dec_pool;
2718 + for (i = 0; i != RPIVID_DEC_ENV_COUNT - 1; ++i)
2719 + ctx->dec_pool[i].next = ctx->dec_pool + i + 1;
2721 + // Fill in other bits
2722 + for (i = 0; i != RPIVID_DEC_ENV_COUNT; ++i) {
2723 + struct rpivid_dec_env *const de = ctx->dec_pool + i;
2726 + de->decode_order = i;
2727 +// de->cmd_max = 1024;
2728 + de->cmd_max = 8096;
2729 + de->cmd_fifo = kmalloc_array(de->cmd_max,
2730 + sizeof(struct rpi_cmd),
2732 + if (!de->cmd_fifo)
2739 + dec_env_uninit(ctx);
2743 +// Assume that we get exactly the same DPB for every slice;
2744 +// it makes no real sense otherwise
2745 +#if V4L2_HEVC_DPB_ENTRIES_NUM_MAX > 16
2746 +#error HEVC_DPB_ENTRIES > h/w slots
2749 +static u32 mk_config2(const struct rpivid_dec_state *const s)
2751 + const struct v4l2_ctrl_hevc_sps *const sps = &s->sps;
2752 + const struct v4l2_ctrl_hevc_pps *const pps = &s->pps;
2755 + c = (sps->bit_depth_luma_minus8 + 8) << 0;
2757 + c |= (sps->bit_depth_chroma_minus8 + 8) << 4;
2759 + if (sps->bit_depth_luma_minus8)
2762 + if (sps->bit_depth_chroma_minus8)
2764 + c |= s->log2_ctb_size << 10;
2765 + if (pps->flags & V4L2_HEVC_PPS_FLAG_CONSTRAINED_INTRA_PRED)
2767 + if (sps->flags & V4L2_HEVC_SPS_FLAG_STRONG_INTRA_SMOOTHING_ENABLED)
2770 + c |= BIT(15); /* Write motion vectors to external memory */
2771 + c |= (pps->log2_parallel_merge_level_minus2 + 2) << 16;
2772 + if (s->slice_temporal_mvp)
2774 + if (sps->flags & V4L2_HEVC_SPS_FLAG_PCM_LOOP_FILTER_DISABLED)
2776 + c |= (pps->pps_cb_qp_offset & 31) << 21;
2777 + c |= (pps->pps_cr_qp_offset & 31) << 26;
2781 +static inline bool is_ref_unit_type(const unsigned int nal_unit_type)
2784 +	 * True for 1, 3, 5, 7, 9, 11, 13, 15 (_R types) and all types >= 16 (IRAP)
2786 + return (nal_unit_type & ~0xe) != 0;
2789 +static void rpivid_h265_setup(struct rpivid_ctx *ctx, struct rpivid_run *run)
2791 + struct rpivid_dev *const dev = ctx->dev;
2792 + const struct v4l2_ctrl_hevc_decode_params *const dec =
2794 + /* sh0 used where slice header contents should be constant over all
2795 + * slices, or first slice of frame
2797 + const struct v4l2_ctrl_hevc_slice_params *const sh0 =
2798 + run->h265.slice_params;
2799 + struct rpivid_q_aux *dpb_q_aux[V4L2_HEVC_DPB_ENTRIES_NUM_MAX];
2800 + struct rpivid_dec_state *const s = ctx->state;
2801 + struct vb2_queue *vq;
2802 + struct rpivid_dec_env *de = ctx->dec0;
2803 + unsigned int prev_rs;
2806 + bool slice_temporal_mvp;
2809 + xtrace_in(dev, de);
2810 + s->sh = NULL; // Avoid use until in the slice loop
2813 + ((run->src->flags & V4L2_BUF_FLAG_M2M_HOLD_CAPTURE_BUF) == 0);
2815 + slice_temporal_mvp = (sh0->flags &
2816 + V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_TEMPORAL_MVP_ENABLED);
2818 + if (de && de->state != RPIVID_DECODE_END) {
2819 + switch (de->state) {
2820 + case RPIVID_DECODE_SLICE_CONTINUE:
2824 + v4l2_err(&dev->v4l2_dev, "%s: Unexpected state: %d\n",
2825 + __func__, de->state);
2827 + case RPIVID_DECODE_ERROR_CONTINUE:
2828 + // Uncleared error - fail now
2832 + if (s->slice_temporal_mvp != slice_temporal_mvp) {
2833 + v4l2_warn(&dev->v4l2_dev,
2834 + "Slice Temporal MVP non-constant\n");
2839 + unsigned int ctb_size_y;
2840 + bool sps_changed = false;
2842 + if (memcmp(&s->sps, run->h265.sps, sizeof(s->sps)) != 0) {
2844 + v4l2_info(&dev->v4l2_dev, "SPS changed\n");
2845 + memcpy(&s->sps, run->h265.sps, sizeof(s->sps));
2846 + sps_changed = true;
2848 + if (sps_changed ||
2849 + memcmp(&s->pps, run->h265.pps, sizeof(s->pps)) != 0) {
2851 + v4l2_info(&dev->v4l2_dev, "PPS changed\n");
2852 + memcpy(&s->pps, run->h265.pps, sizeof(s->pps));
2854 + /* Recalc stuff as required */
2855 + rv = updated_ps(s);
2860 + de = dec_env_new(ctx);
2862 + v4l2_err(&dev->v4l2_dev,
2863 + "Failed to find free decode env\n");
2869 + 1U << (s->sps.log2_min_luma_coding_block_size_minus3 +
2871 + s->sps.log2_diff_max_min_luma_coding_block_size);
2873 + de->pic_width_in_ctbs_y =
2874 + (s->sps.pic_width_in_luma_samples + ctb_size_y - 1) /
2875 + ctb_size_y; // 7-15
2876 + de->pic_height_in_ctbs_y =
2877 + (s->sps.pic_height_in_luma_samples + ctb_size_y - 1) /
2878 + ctb_size_y; // 7-17
2880 + de->dpbno_col = ~0U;
2882 + de->bit_copy_gptr = ctx->bitbufs + ctx->p1idx;
2883 + de->bit_copy_len = 0;
2885 + de->frame_c_offset = ctx->dst_fmt.height * 128;
2886 + de->frame_stride = ctx->dst_fmt.plane_fmt[0].bytesperline * 128;
2888 + vb2_dma_contig_plane_dma_addr(&run->dst->vb2_buf, 0);
2889 + de->frame_aux = NULL;
2891 + if (s->sps.bit_depth_luma_minus8 !=
2892 + s->sps.bit_depth_chroma_minus8) {
2893 + v4l2_warn(&dev->v4l2_dev,
2894 + "Chroma depth (%d) != Luma depth (%d)\n",
2895 + s->sps.bit_depth_chroma_minus8 + 8,
2896 + s->sps.bit_depth_luma_minus8 + 8);
2899 + if (s->sps.bit_depth_luma_minus8 == 0) {
2900 + if (ctx->dst_fmt.pixelformat !=
2901 + V4L2_PIX_FMT_NV12_COL128) {
2902 + v4l2_err(&dev->v4l2_dev,
2903 + "Pixel format %#x != NV12_COL128 for 8-bit output",
2904 + ctx->dst_fmt.pixelformat);
2907 + } else if (s->sps.bit_depth_luma_minus8 == 2) {
2908 + if (ctx->dst_fmt.pixelformat !=
2909 + V4L2_PIX_FMT_NV12_10_COL128) {
2910 + v4l2_err(&dev->v4l2_dev,
2911 + "Pixel format %#x != NV12_10_COL128 for 10-bit output",
2912 + ctx->dst_fmt.pixelformat);
2916 + v4l2_warn(&dev->v4l2_dev,
2917 + "Luma depth (%d) unsupported\n",
2918 + s->sps.bit_depth_luma_minus8 + 8);
2921 + if (run->dst->vb2_buf.num_planes != 1) {
2922 + v4l2_warn(&dev->v4l2_dev, "Capture planes (%d) != 1\n",
2923 + run->dst->vb2_buf.num_planes);
2926 + if (run->dst->planes[0].length <
2927 + ctx->dst_fmt.plane_fmt[0].sizeimage) {
2928 + v4l2_warn(&dev->v4l2_dev,
2929 + "Capture plane[0] length (%d) < sizeimage (%d)\n",
2930 + run->dst->planes[0].length,
2931 + ctx->dst_fmt.plane_fmt[0].sizeimage);
2935 +		// Fill in ref planes with our address so that if we mess
2936 +		// up refs somehow then we still have a valid address
2938 + for (i = 0; i != 16; ++i)
2939 + de->ref_addrs[i] = de->frame_addr;
2942 + * Stash initial temporal_mvp flag
2943 + * This must be the same for all pic slices (7.4.7.1)
2945 + s->slice_temporal_mvp = slice_temporal_mvp;
2948 + * Need Aux ents for all (ref) DPB ents if temporal MV could
2949 + * be enabled for any pic
2951 + s->use_aux = ((s->sps.flags &
2952 + V4L2_HEVC_SPS_FLAG_SPS_TEMPORAL_MVP_ENABLED) != 0);
2953 + s->mk_aux = s->use_aux &&
2954 + (s->sps.sps_max_sub_layers_minus1 >= sh0->nuh_temporal_id_plus1 ||
2955 + is_ref_unit_type(sh0->nal_unit_type));
2957 + // Phase 2 reg pre-calc
2958 + de->rpi_config2 = mk_config2(s);
2959 + de->rpi_framesize = (s->sps.pic_height_in_luma_samples << 16) |
2960 + s->sps.pic_width_in_luma_samples;
2961 + de->rpi_currpoc = sh0->slice_pic_order_cnt;
2963 + if (s->sps.flags &
2964 + V4L2_HEVC_SPS_FLAG_SPS_TEMPORAL_MVP_ENABLED) {
2965 + setup_colmv(ctx, run, s);
2970 + if (sh0->slice_segment_addr != 0) {
2971 + v4l2_warn(&dev->v4l2_dev,
2972 + "New frame but segment_addr=%d\n",
2973 + sh0->slice_segment_addr);
2977 + /* Allocate a bitbuf if we need one - don't need one if single
2978 + * slice as we can use the src buf directly
2980 + if (!frame_end && !de->bit_copy_gptr->ptr) {
2981 + size_t bits_alloc;
2982 + bits_alloc = rpivid_bit_buf_size(s->sps.pic_width_in_luma_samples,
2983 + s->sps.pic_height_in_luma_samples,
2984 + s->sps.bit_depth_luma_minus8);
2986 + if (gptr_alloc(dev, de->bit_copy_gptr,
2988 + DMA_ATTR_FORCE_CONTIGUOUS) != 0) {
2989 + v4l2_err(&dev->v4l2_dev,
2990 + "Unable to alloc buf (%zu) for bit copy\n",
2994 + v4l2_info(&dev->v4l2_dev,
2995 + "Alloc buf (%zu) for bit copy OK\n",
3000 + // Either map src buffer or use directly
3002 + s->src_buf = NULL;
3005 + s->src_addr = vb2_dma_contig_plane_dma_addr(&run->src->vb2_buf,
3008 + s->src_buf = vb2_plane_vaddr(&run->src->vb2_buf, 0);
3009 + if (!s->src_addr && !s->src_buf) {
3010 + v4l2_err(&dev->v4l2_dev, "Failed to map src buffer\n");
3014 + // Pre calc a few things
3016 + for (i = 0; i != run->h265.slice_ents; ++i) {
3017 + const struct v4l2_ctrl_hevc_slice_params *const sh = sh0 + i;
3018 + const bool last_slice = frame_end && i + 1 == run->h265.slice_ents;
3022 + if (run->src->planes[0].bytesused < (sh->bit_size + 7) / 8) {
3023 + v4l2_warn(&dev->v4l2_dev,
3024 + "Bit size %d > bytesused %d\n",
3025 + sh->bit_size, run->src->planes[0].bytesused);
3028 + if (sh->data_byte_offset >= sh->bit_size / 8) {
3029 + v4l2_warn(&dev->v4l2_dev,
3030 + "Bit size %u < Byte offset %u * 8\n",
3031 + sh->bit_size, sh->data_byte_offset);
3035 + s->slice_qp = 26 + s->pps.init_qp_minus26 + sh->slice_qp_delta;
3036 + s->max_num_merge_cand = sh->slice_type == HEVC_SLICE_I ?
3038 + (5 - sh->five_minus_max_num_merge_cand);
3039 + s->dependent_slice_segment_flag =
3041 + V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT) != 0);
3043 + s->nb_refs[0] = (sh->slice_type == HEVC_SLICE_I) ?
3045 + sh->num_ref_idx_l0_active_minus1 + 1;
3046 + s->nb_refs[1] = (sh->slice_type != HEVC_SLICE_B) ?
3048 + sh->num_ref_idx_l1_active_minus1 + 1;
3050 + if (s->sps.flags & V4L2_HEVC_SPS_FLAG_SCALING_LIST_ENABLED)
3051 + populate_scaling_factors(run, de, s);
3053 + /* Calc all the random coord info to avoid repeated conversion in/out */
3054 + s->start_ts = s->ctb_addr_rs_to_ts[sh->slice_segment_addr];
3055 + s->start_ctb_x = sh->slice_segment_addr % de->pic_width_in_ctbs_y;
3056 + s->start_ctb_y = sh->slice_segment_addr / de->pic_width_in_ctbs_y;
3057 + /* Last CTB of previous slice */
3058 + prev_rs = !s->start_ts ? 0 : s->ctb_addr_ts_to_rs[s->start_ts - 1];
3059 + s->prev_ctb_x = prev_rs % de->pic_width_in_ctbs_y;
3060 + s->prev_ctb_y = prev_rs / de->pic_width_in_ctbs_y;
3062 + if ((s->pps.flags & V4L2_HEVC_PPS_FLAG_ENTROPY_CODING_SYNC_ENABLED))
3063 + rv = wpp_decode_slice(de, s, last_slice);
3065 + rv = decode_slice(de, s, last_slice);
3073 + xtrace_ok(dev, de);
3078 + memset(dpb_q_aux, 0,
3079 + sizeof(*dpb_q_aux) * V4L2_HEVC_DPB_ENTRIES_NUM_MAX);
3081 + // Locate ref frames
3082 + // At least in the current implementation this is constant across all
3083 + // slices. If this changes we will need idx mapping code.
3084 + // Uses sh so here rather than trigger
3086 + vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx,
3087 + V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
3090 + v4l2_err(&dev->v4l2_dev, "VQ gone!\n");
3094 + // v4l2_info(&dev->v4l2_dev, "rpivid_h265_end of frame\n");
3095 + if (write_cmd_buffer(dev, de, s))
3098 + for (i = 0; i < dec->num_active_dpb_entries; ++i) {
3099 + struct vb2_buffer *buf = vb2_find_buffer(vq, dec->dpb[i].timestamp);
3101 + v4l2_warn(&dev->v4l2_dev,
3102 + "Missing DPB ent %d, timestamp=%lld\n",
3103 + i, (long long)dec->dpb[i].timestamp);
3108 + int buffer_index = buf->index;
3109 + dpb_q_aux[i] = aux_q_ref_idx(ctx, buffer_index);
3110 + if (!dpb_q_aux[i])
3111 + v4l2_warn(&dev->v4l2_dev,
3112 + "Missing DPB AUX ent %d, timestamp=%lld, index=%d\n",
3113 + i, (long long)dec->dpb[i].timestamp,
3117 + de->ref_addrs[i] =
3118 + vb2_dma_contig_plane_dma_addr(buf, 0);
3121 + // Move DPB from temp
3122 + for (i = 0; i != V4L2_HEVC_DPB_ENTRIES_NUM_MAX; ++i) {
3123 + aux_q_release(ctx, &s->ref_aux[i]);
3124 + s->ref_aux[i] = dpb_q_aux[i];
3126 + // Unref the old frame aux too - it is either in the DPB or not
3128 + aux_q_release(ctx, &s->frame_aux);
3131 + s->frame_aux = aux_q_new(ctx, run->dst->vb2_buf.index);
3133 + if (!s->frame_aux) {
3134 + v4l2_err(&dev->v4l2_dev,
3135 + "Failed to obtain aux storage for frame\n");
3139 + de->frame_aux = aux_q_ref(ctx, s->frame_aux);
3142 + if (de->dpbno_col != ~0U) {
3143 + if (de->dpbno_col >= dec->num_active_dpb_entries) {
3144 + v4l2_err(&dev->v4l2_dev,
3145 + "Col ref index %d >= %d\n",
3147 + dec->num_active_dpb_entries);
3149 + // Standard requires that the col pic is
3150 + // constant for the duration of the pic
3151 + // (text of collocated_ref_idx in H265-2 2018
3154 + // Spot the collocated ref in passing
3155 + de->col_aux = aux_q_ref(ctx,
3156 + dpb_q_aux[de->dpbno_col]);
3158 + if (!de->col_aux) {
3159 + v4l2_warn(&dev->v4l2_dev,
3160 + "Missing DPB ent for col\n");
3161 + // Probably need to abort if this fails
3162 + // as P2 may explode on bad data
3168 + de->state = RPIVID_DECODE_PHASE1;
3169 + xtrace_ok(dev, de);
3174 + // Actual error reporting happens in Trigger
3175 + de->state = frame_end ? RPIVID_DECODE_ERROR_DONE :
3176 + RPIVID_DECODE_ERROR_CONTINUE;
3177 + xtrace_fail(dev, de);
3180 +//////////////////////////////////////////////////////////////////////////////
3181 +// Handle PU and COEFF stream overflow
3184 +// -1 Phase 1 decode error
3186 +// >0 Out of space (bitmask)
3188 +#define STATUS_COEFF_EXHAUSTED 8
3189 +#define STATUS_PU_EXHAUSTED 16
3191 +static int check_status(const struct rpivid_dev *const dev)
3193 + const u32 cfstatus = apb_read(dev, RPI_CFSTATUS);
3194 + const u32 cfnum = apb_read(dev, RPI_CFNUM);
3195 + u32 status = apb_read(dev, RPI_STATUS);
3197 + // Handle PU and COEFF stream overflow
3199 +	// this is the definition of successful completion of phase 1
3200 +	// it ensures that the status register is zero and all blocks in each tile
3202 + if (cfstatus == cfnum)
3203 +		return 0; // No error
3205 + status &= (STATUS_PU_EXHAUSTED | STATUS_COEFF_EXHAUSTED);
3212 +static void phase2_cb(struct rpivid_dev *const dev, void *v)
3214 + struct rpivid_dec_env *const de = v;
3216 + xtrace_in(dev, de);
3218 + /* Done with buffers - allow new P1 */
3219 + rpivid_hw_irq_active1_enable_claim(dev, 1);
3221 + v4l2_m2m_buf_done(de->frame_buf, VB2_BUF_STATE_DONE);
3222 + de->frame_buf = NULL;
3224 +#if USE_REQUEST_PIN
3225 + media_request_unpin(de->req_pin);
3226 + de->req_pin = NULL;
3228 + media_request_object_complete(de->req_obj);
3229 + de->req_obj = NULL;
3232 + xtrace_ok(dev, de);
3233 + dec_env_delete(de);
3236 +static void phase2_claimed(struct rpivid_dev *const dev, void *v)
3238 + struct rpivid_dec_env *const de = v;
3241 + xtrace_in(dev, de);
3243 + apb_write_vc_addr(dev, RPI_PURBASE, de->pu_base_vc);
3244 + apb_write_vc_len(dev, RPI_PURSTRIDE, de->pu_stride);
3245 + apb_write_vc_addr(dev, RPI_COEFFRBASE, de->coeff_base_vc);
3246 + apb_write_vc_len(dev, RPI_COEFFRSTRIDE, de->coeff_stride);
3248 + apb_write_vc_addr(dev, RPI_OUTYBASE, de->frame_addr);
3249 + apb_write_vc_addr(dev, RPI_OUTCBASE,
3250 + de->frame_addr + de->frame_c_offset);
3251 + apb_write_vc_len(dev, RPI_OUTYSTRIDE, de->frame_stride);
3252 + apb_write_vc_len(dev, RPI_OUTCSTRIDE, de->frame_stride);
3254 + // v4l2_info(&dev->v4l2_dev, "Frame: Y=%llx, C=%llx, Stride=%x\n",
3255 + // de->frame_addr, de->frame_addr + de->frame_c_offset,
3256 + // de->frame_stride);
3258 + for (i = 0; i < 16; i++) {
3259 + // Strides are in fact unused but fill in anyway
3260 + apb_write_vc_addr(dev, 0x9000 + 16 * i, de->ref_addrs[i]);
3261 + apb_write_vc_len(dev, 0x9004 + 16 * i, de->frame_stride);
3262 + apb_write_vc_addr(dev, 0x9008 + 16 * i,
3263 + de->ref_addrs[i] + de->frame_c_offset);
3264 + apb_write_vc_len(dev, 0x900C + 16 * i, de->frame_stride);
3267 + apb_write(dev, RPI_CONFIG2, de->rpi_config2);
3268 + apb_write(dev, RPI_FRAMESIZE, de->rpi_framesize);
3269 + apb_write(dev, RPI_CURRPOC, de->rpi_currpoc);
3270 + // v4l2_info(&dev->v4l2_dev, "Config2=%#x, FrameSize=%#x, POC=%#x\n",
3271 + // de->rpi_config2, de->rpi_framesize, de->rpi_currpoc);
3273 + // collocated reads/writes
3274 + apb_write_vc_len(dev, RPI_COLSTRIDE,
3275 + de->ctx->colmv_stride); // Read vals
3276 + apb_write_vc_len(dev, RPI_MVSTRIDE,
3277 + de->ctx->colmv_stride); // Write vals
3278 + apb_write_vc_addr(dev, RPI_MVBASE,
3279 + !de->frame_aux ? 0 : de->frame_aux->col.addr);
3280 + apb_write_vc_addr(dev, RPI_COLBASE,
3281 + !de->col_aux ? 0 : de->col_aux->col.addr);
3283 + //v4l2_info(&dev->v4l2_dev,
3284 + // "Mv=%llx, Col=%llx, Stride=%x, Buf=%llx->%llx\n",
3285 + // de->rpi_mvbase, de->rpi_colbase, de->ctx->colmv_stride,
3286 + // de->ctx->colmvbuf.addr, de->ctx->colmvbuf.addr +
3287 + // de->ctx->colmvbuf.size);
3289 + rpivid_hw_irq_active2_irq(dev, &de->irq_ent, phase2_cb, de);
3291 + apb_write_final(dev, RPI_NUMROWS, de->pic_height_in_ctbs_y);
3293 + xtrace_ok(dev, de);
3296 +static void phase1_claimed(struct rpivid_dev *const dev, void *v);
3298 +// release any and all objects associated with de
3299 +// and re-enable phase 1 if required
3300 +static void phase1_err_fin(struct rpivid_dev *const dev,
3301 + struct rpivid_ctx *const ctx,
3302 + struct rpivid_dec_env *const de)
3304 + /* Return all detached buffers */
3306 + v4l2_m2m_buf_done(de->src_buf, VB2_BUF_STATE_ERROR);
3307 + de->src_buf = NULL;
3308 + if (de->frame_buf)
3309 + v4l2_m2m_buf_done(de->frame_buf, VB2_BUF_STATE_ERROR);
3310 + de->frame_buf = NULL;
3311 +#if USE_REQUEST_PIN
3313 + media_request_unpin(de->req_pin);
3314 + de->req_pin = NULL;
3317 + media_request_object_complete(de->req_obj);
3318 + de->req_obj = NULL;
3321 + dec_env_delete(de);
3323 +	/* Re-enable phase 0 if we were blocking */
3324 + if (atomic_add_return(-1, &ctx->p1out) >= RPIVID_P1BUF_COUNT - 1)
3325 + v4l2_m2m_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx);
3327 + /* Done with P1-P2 buffers - allow new P1 */
3328 + rpivid_hw_irq_active1_enable_claim(dev, 1);
3331 +static void phase1_thread(struct rpivid_dev *const dev, void *v)
3333 + struct rpivid_dec_env *const de = v;
3334 + struct rpivid_ctx *const ctx = de->ctx;
3336 + struct rpivid_gptr *const pu_gptr = ctx->pu_bufs + ctx->p2idx;
3337 + struct rpivid_gptr *const coeff_gptr = ctx->coeff_bufs + ctx->p2idx;
3339 + xtrace_in(dev, de);
3341 + if (de->p1_status & STATUS_PU_EXHAUSTED) {
3342 + if (gptr_realloc_new(dev, pu_gptr, next_size(pu_gptr->size))) {
3343 + v4l2_err(&dev->v4l2_dev,
3344 + "%s: PU realloc (%zx) failed\n",
3345 + __func__, pu_gptr->size);
3348 + v4l2_info(&dev->v4l2_dev, "%s: PU realloc (%zx) OK\n",
3349 + __func__, pu_gptr->size);
3352 + if (de->p1_status & STATUS_COEFF_EXHAUSTED) {
3353 + if (gptr_realloc_new(dev, coeff_gptr,
3354 + next_size(coeff_gptr->size))) {
3355 + v4l2_err(&dev->v4l2_dev,
3356 + "%s: Coeff realloc (%zx) failed\n",
3357 + __func__, coeff_gptr->size);
3360 + v4l2_info(&dev->v4l2_dev, "%s: Coeff realloc (%zx) OK\n",
3361 + __func__, coeff_gptr->size);
3364 + phase1_claimed(dev, de);
3365 + xtrace_ok(dev, de);
3369 + if (!pu_gptr->addr || !coeff_gptr->addr) {
3370 + v4l2_err(&dev->v4l2_dev,
3371 + "%s: Fatal: failed to reclaim old alloc\n",
3373 + ctx->fatal_err = 1;
3375 + xtrace_fail(dev, de);
3376 + phase1_err_fin(dev, ctx, de);
3379 +/* Always called in irq context (this is good) */
3380 +static void phase1_cb(struct rpivid_dev *const dev, void *v)
3382 + struct rpivid_dec_env *const de = v;
3383 + struct rpivid_ctx *const ctx = de->ctx;
3385 + xtrace_in(dev, de);
3387 + de->p1_status = check_status(dev);
3389 + if (de->p1_status != 0) {
3390 + v4l2_info(&dev->v4l2_dev, "%s: Post wait: %#x\n",
3391 + __func__, de->p1_status);
3393 + if (de->p1_status < 0)
3396 + /* Need to realloc - push onto a thread rather than IRQ */
3397 + rpivid_hw_irq_active1_thread(dev, &de->irq_ent,
3398 + phase1_thread, de);
3402 + v4l2_m2m_buf_done(de->src_buf, VB2_BUF_STATE_DONE);
3403 + de->src_buf = NULL;
3405 + /* All phase1 error paths done - it is safe to inc p2idx */
3407 + (ctx->p2idx + 1 >= RPIVID_P2BUF_COUNT) ? 0 : ctx->p2idx + 1;
3409 +	/* Re-enable the next setup if we were blocking */
3410 + if (atomic_add_return(-1, &ctx->p1out) >= RPIVID_P1BUF_COUNT - 1) {
3411 + xtrace_fin(dev, de);
3412 + v4l2_m2m_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx);
3415 + rpivid_hw_irq_active2_claim(dev, &de->irq_ent, phase2_claimed, de);
3417 + xtrace_ok(dev, de);
3421 + xtrace_fail(dev, de);
3422 + phase1_err_fin(dev, ctx, de);
3425 +static void phase1_claimed(struct rpivid_dev *const dev, void *v)
3427 + struct rpivid_dec_env *const de = v;
3428 + struct rpivid_ctx *const ctx = de->ctx;
3430 + const struct rpivid_gptr * const pu_gptr = ctx->pu_bufs + ctx->p2idx;
3431 + const struct rpivid_gptr * const coeff_gptr = ctx->coeff_bufs +
3434 + xtrace_in(dev, de);
3436 + if (ctx->fatal_err)
3439 + de->pu_base_vc = pu_gptr->addr;
3441 + ALIGN_DOWN(pu_gptr->size / de->pic_height_in_ctbs_y, 64);
3443 + de->coeff_base_vc = coeff_gptr->addr;
3444 + de->coeff_stride =
3445 + ALIGN_DOWN(coeff_gptr->size / de->pic_height_in_ctbs_y, 64);
3447 + /* phase1_claimed blocked until cb_phase1 completed so p2idx inc
3448 + * in cb_phase1 after error detection
3451 + apb_write_vc_addr(dev, RPI_PUWBASE, de->pu_base_vc);
3452 + apb_write_vc_len(dev, RPI_PUWSTRIDE, de->pu_stride);
3453 + apb_write_vc_addr(dev, RPI_COEFFWBASE, de->coeff_base_vc);
3454 + apb_write_vc_len(dev, RPI_COEFFWSTRIDE, de->coeff_stride);
3456 + // Trigger command FIFO
3457 + apb_write(dev, RPI_CFNUM, de->cmd_len);
3460 + rpivid_hw_irq_active1_irq(dev, &de->irq_ent, phase1_cb, de);
3462 + // And start the h/w
3463 + apb_write_vc_addr_final(dev, RPI_CFBASE, de->cmd_addr);
3465 + xtrace_ok(dev, de);
3469 + xtrace_fail(dev, de);
3470 + phase1_err_fin(dev, ctx, de);
3473 +static void dec_state_delete(struct rpivid_ctx *const ctx)
3476 + struct rpivid_dec_state *const s = ctx->state;
3480 + ctx->state = NULL;
3484 + for (i = 0; i != HEVC_MAX_REFS; ++i)
3485 + aux_q_release(ctx, &s->ref_aux[i]);
3486 + aux_q_release(ctx, &s->frame_aux);
3493 + wait_queue_head_t wq;
3494 + struct rpivid_hw_irq_ent irq_ent;
3497 +static void phase2_sync_claimed(struct rpivid_dev *const dev, void *v)
3499 + struct irq_sync *const sync = v;
3501 + atomic_set(&sync->done, 1);
3502 + wake_up(&sync->wq);
3505 +static void phase1_sync_claimed(struct rpivid_dev *const dev, void *v)
3507 + struct irq_sync *const sync = v;
3509 + rpivid_hw_irq_active1_enable_claim(dev, 1);
3510 + rpivid_hw_irq_active2_claim(dev, &sync->irq_ent, phase2_sync_claimed, sync);
3513 +/* Sync with IRQ operations
3515 + * Claims phase1 and phase2 in turn and waits for the phase2 claim so any
3516 + * pending IRQ ops will have completed by the time this returns
3518 + * phase1 has counted enables so must re-enable once claimed
3519 + * phase2 has unlimited enables
3521 +static void irq_sync(struct rpivid_dev *const dev)
3523 + struct irq_sync sync;
3525 + atomic_set(&sync.done, 0);
3526 + init_waitqueue_head(&sync.wq);
3528 + rpivid_hw_irq_active1_claim(dev, &sync.irq_ent, phase1_sync_claimed, &sync);
3529 + wait_event(sync.wq, atomic_read(&sync.done));
3532 +static void h265_ctx_uninit(struct rpivid_dev *const dev, struct rpivid_ctx *ctx)
3536 + dec_env_uninit(ctx);
3537 + dec_state_delete(ctx);
3539 + // dec_env & state must be killed before this to release the buffer to
3541 + aux_q_uninit(ctx);
3543 + for (i = 0; i != ARRAY_SIZE(ctx->bitbufs); ++i)
3544 + gptr_free(dev, ctx->bitbufs + i);
3545 + for (i = 0; i != ARRAY_SIZE(ctx->pu_bufs); ++i)
3546 + gptr_free(dev, ctx->pu_bufs + i);
3547 + for (i = 0; i != ARRAY_SIZE(ctx->coeff_bufs); ++i)
3548 + gptr_free(dev, ctx->coeff_bufs + i);
3551 +static void rpivid_h265_stop(struct rpivid_ctx *ctx)
3553 + struct rpivid_dev *const dev = ctx->dev;
3555 + v4l2_info(&dev->v4l2_dev, "%s\n", __func__);
3558 + h265_ctx_uninit(dev, ctx);
3561 +static int rpivid_h265_start(struct rpivid_ctx *ctx)
3563 + struct rpivid_dev *const dev = ctx->dev;
3566 + unsigned int w = ctx->dst_fmt.width;
3567 + unsigned int h = ctx->dst_fmt.height;
3570 + size_t coeff_alloc;
3572 +#if DEBUG_TRACE_P1_CMD
3576 + // Generate a sanitised WxH for memory alloc
3577 + // Assume HD if unset
3588 + v4l2_info(&dev->v4l2_dev, "%s: (%dx%d)\n", __func__,
3589 + ctx->dst_fmt.width, ctx->dst_fmt.height);
3591 + ctx->fatal_err = 0;
3593 + ctx->state = kzalloc(sizeof(*ctx->state), GFP_KERNEL);
3594 + if (!ctx->state) {
3595 + v4l2_err(&dev->v4l2_dev, "Failed to allocate decode state\n");
3599 + if (dec_env_init(ctx) != 0) {
3600 + v4l2_err(&dev->v4l2_dev, "Failed to allocate decode envs\n");
3604 + // Finger in the air PU & Coeff alloc
3605 + // Will be realloced if too small
3606 + coeff_alloc = rpivid_round_up_size(wxh);
3607 + pu_alloc = rpivid_round_up_size(wxh / 4);
3608 + for (i = 0; i != ARRAY_SIZE(ctx->pu_bufs); ++i) {
3609 + // Don't actually need a kernel mapping here
3610 + if (gptr_alloc(dev, ctx->pu_bufs + i, pu_alloc,
3611 + DMA_ATTR_NO_KERNEL_MAPPING))
3613 + if (gptr_alloc(dev, ctx->coeff_bufs + i, coeff_alloc,
3614 + DMA_ATTR_NO_KERNEL_MAPPING))
3622 + h265_ctx_uninit(dev, ctx);
3626 +static void rpivid_h265_trigger(struct rpivid_ctx *ctx)
3628 + struct rpivid_dev *const dev = ctx->dev;
3629 + struct rpivid_dec_env *const de = ctx->dec0;
3631 + xtrace_in(dev, de);
3633 + switch (!de ? RPIVID_DECODE_ERROR_CONTINUE : de->state) {
3634 + case RPIVID_DECODE_SLICE_START:
3635 + de->state = RPIVID_DECODE_SLICE_CONTINUE;
3637 + case RPIVID_DECODE_SLICE_CONTINUE:
3638 + v4l2_m2m_buf_done_and_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx,
3639 + VB2_BUF_STATE_DONE);
3640 + xtrace_ok(dev, de);
3644 + v4l2_err(&dev->v4l2_dev, "%s: Unexpected state: %d\n", __func__,
3647 + case RPIVID_DECODE_ERROR_DONE:
3649 + dec_env_delete(de);
3651 + case RPIVID_DECODE_ERROR_CONTINUE:
3652 + xtrace_fin(dev, de);
3653 + v4l2_m2m_buf_done_and_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx,
3654 + VB2_BUF_STATE_ERROR);
3657 + case RPIVID_DECODE_PHASE1:
3660 +#if !USE_REQUEST_PIN
3661 + /* Alloc a new request object - needs to be alloced dynamically
3662 + * as the media request will release it some random time after
3665 + de->req_obj = kmalloc(sizeof(*de->req_obj), GFP_KERNEL);
3666 + if (!de->req_obj) {
3667 + xtrace_fail(dev, de);
3668 + dec_env_delete(de);
3669 + v4l2_m2m_buf_done_and_job_finish(dev->m2m_dev,
3671 + VB2_BUF_STATE_ERROR);
3674 + media_request_object_init(de->req_obj);
3675 +#warning probably needs to _get the req obj too
3677 + ctx->p1idx = (ctx->p1idx + 1 >= RPIVID_P1BUF_COUNT) ?
3678 + 0 : ctx->p1idx + 1;
3680 + /* We know we have src & dst so no need to test */
3681 + de->src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
3682 + de->frame_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
3684 +#if USE_REQUEST_PIN
3685 + de->req_pin = de->src_buf->vb2_buf.req_obj.req;
3686 + media_request_pin(de->req_pin);
3688 + media_request_object_bind(de->src_buf->vb2_buf.req_obj.req,
3689 + &dst_req_obj_ops, de, false,
3693 + /* We could get rid of the src buffer here if we've already
3694 + * copied it, but we don't copy the last buffer unless it
3695 + * didn't return a contig dma addr and that shouldn't happen
3698 + /* Enable the next setup if our Q isn't too big */
3699 + if (atomic_add_return(1, &ctx->p1out) < RPIVID_P1BUF_COUNT) {
3700 + xtrace_fin(dev, de);
3701 + v4l2_m2m_job_finish(dev->m2m_dev, ctx->fh.m2m_ctx);
3704 + rpivid_hw_irq_active1_claim(dev, &de->irq_ent, phase1_claimed,
3706 + xtrace_ok(dev, de);
3711 +const struct rpivid_dec_ops rpivid_dec_ops_h265 = {
3712 + .setup = rpivid_h265_setup,
3713 + .start = rpivid_h265_start,
3714 + .stop = rpivid_h265_stop,
3715 + .trigger = rpivid_h265_trigger,
3718 +static int try_ctrl_sps(struct v4l2_ctrl *ctrl)
3720 + const struct v4l2_ctrl_hevc_sps *const sps = ctrl->p_new.p_hevc_sps;
3721 + struct rpivid_ctx *const ctx = ctrl->priv;
3722 + struct rpivid_dev *const dev = ctx->dev;
3724 + if (sps->chroma_format_idc != 1) {
3725 + v4l2_warn(&dev->v4l2_dev,
3726 + "Chroma format (%d) unsupported\n",
3727 + sps->chroma_format_idc);
3731 + if (sps->bit_depth_luma_minus8 != 0 &&
3732 + sps->bit_depth_luma_minus8 != 2) {
3733 + v4l2_warn(&dev->v4l2_dev,
3734 + "Luma depth (%d) unsupported\n",
3735 + sps->bit_depth_luma_minus8 + 8);
3739 + if (sps->bit_depth_luma_minus8 != sps->bit_depth_chroma_minus8) {
3740 + v4l2_warn(&dev->v4l2_dev,
3741 + "Chroma depth (%d) != Luma depth (%d)\n",
3742 + sps->bit_depth_chroma_minus8 + 8,
3743 + sps->bit_depth_luma_minus8 + 8);
3747 + if (!sps->pic_width_in_luma_samples ||
3748 + !sps->pic_height_in_luma_samples ||
3749 + sps->pic_width_in_luma_samples > 4096 ||
3750 + sps->pic_height_in_luma_samples > 4096) {
3751 + v4l2_warn(&dev->v4l2_dev,
3752 + "Bad sps width (%u) x height (%u)\n",
3753 + sps->pic_width_in_luma_samples,
3754 + sps->pic_height_in_luma_samples);
3758 + if (!ctx->dst_fmt_set)
3761 + if ((sps->bit_depth_luma_minus8 == 0 &&
3762 + ctx->dst_fmt.pixelformat != V4L2_PIX_FMT_NV12_COL128) ||
3763 + (sps->bit_depth_luma_minus8 == 2 &&
3764 + ctx->dst_fmt.pixelformat != V4L2_PIX_FMT_NV12_10_COL128)) {
3765 + v4l2_warn(&dev->v4l2_dev,
3766 + "SPS luma depth %d does not match capture format\n",
3767 + sps->bit_depth_luma_minus8 + 8);
3771 + if (sps->pic_width_in_luma_samples > ctx->dst_fmt.width ||
3772 + sps->pic_height_in_luma_samples > ctx->dst_fmt.height) {
3773 + v4l2_warn(&dev->v4l2_dev,
3774 + "SPS size (%dx%d) > capture size (%d,%d)\n",
3775 + sps->pic_width_in_luma_samples,
3776 + sps->pic_height_in_luma_samples,
3777 + ctx->dst_fmt.width,
3778 + ctx->dst_fmt.height);
3785 +const struct v4l2_ctrl_ops rpivid_hevc_sps_ctrl_ops = {
3786 + .try_ctrl = try_ctrl_sps,
3789 +static int try_ctrl_pps(struct v4l2_ctrl *ctrl)
3791 + const struct v4l2_ctrl_hevc_pps *const pps = ctrl->p_new.p_hevc_pps;
3792 + struct rpivid_ctx *const ctx = ctrl->priv;
3793 + struct rpivid_dev *const dev = ctx->dev;
3796 + V4L2_HEVC_PPS_FLAG_ENTROPY_CODING_SYNC_ENABLED) &&
3798 + V4L2_HEVC_PPS_FLAG_TILES_ENABLED) &&
3799 + (pps->num_tile_columns_minus1 || pps->num_tile_rows_minus1)) {
3800 + v4l2_warn(&dev->v4l2_dev,
3801 + "WPP + Tiles not supported\n");
3808 +const struct v4l2_ctrl_ops rpivid_hevc_pps_ctrl_ops = {
3809 + .try_ctrl = try_ctrl_pps,
3813 +++ b/drivers/staging/media/rpivid/rpivid_hw.c
3815 +// SPDX-License-Identifier: GPL-2.0
3817 + * Raspberry Pi HEVC driver
3819 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
3821 + * Based on the Cedrus VPU driver, that is:
3823 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
3824 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
3825 + * Copyright (C) 2018 Bootlin
3827 +#include <linux/clk.h>
3828 +#include <linux/component.h>
3829 +#include <linux/dma-mapping.h>
3830 +#include <linux/interrupt.h>
3831 +#include <linux/io.h>
3832 +#include <linux/of_reserved_mem.h>
3833 +#include <linux/of_device.h>
3834 +#include <linux/of_platform.h>
3835 +#include <linux/platform_device.h>
3836 +#include <linux/regmap.h>
3837 +#include <linux/reset.h>
3839 +#include <media/videobuf2-core.h>
3840 +#include <media/v4l2-mem2mem.h>
3842 +#include <soc/bcm2835/raspberrypi-firmware.h>
3844 +#include "rpivid.h"
3845 +#include "rpivid_hw.h"
3847 +static void pre_irq(struct rpivid_dev *dev, struct rpivid_hw_irq_ent *ient,
3848 + rpivid_irq_callback cb, void *v,
3849 + struct rpivid_hw_irq_ctrl *ictl)
3851 + unsigned long flags;
3854 + v4l2_err(&dev->v4l2_dev, "Attempt to claim IRQ when already claimed\n");
3861 + spin_lock_irqsave(&ictl->lock, flags);
3864 + spin_unlock_irqrestore(&ictl->lock, flags);
3867 +/* Should be called from inside ictl->lock */
3868 +static inline bool sched_enabled(const struct rpivid_hw_irq_ctrl * const ictl)
3870 + return ictl->no_sched <= 0 && ictl->enable;
3873 +/* Should be called from inside ictl->lock & after checking sched_enabled() */
3874 +static inline void set_claimed(struct rpivid_hw_irq_ctrl * const ictl)
3876 + if (ictl->enable > 0)
3878 + ictl->no_sched = 1;
3881 +/* Should be called from inside ictl->lock */
3882 +static struct rpivid_hw_irq_ent *get_sched(struct rpivid_hw_irq_ctrl * const ictl)
3884 + struct rpivid_hw_irq_ent *ient;
3886 + if (!sched_enabled(ictl))
3889 + ient = ictl->claim;
3892 + ictl->claim = ient->next;
3894 + set_claimed(ictl);
3898 +/* Run a callback & check to see if there is anything else to run */
3899 +static void sched_cb(struct rpivid_dev * const dev,
3900 + struct rpivid_hw_irq_ctrl * const ictl,
3901 + struct rpivid_hw_irq_ent *ient)
3904 + unsigned long flags;
3906 + ient->cb(dev, ient->v);
3908 + spin_lock_irqsave(&ictl->lock, flags);
3910 + /* Always dec no_sched after cb exec - must have been set
3914 + ient = get_sched(ictl);
3916 + spin_unlock_irqrestore(&ictl->lock, flags);
3920 +/* Should only ever be called from its own IRQ cb so no lock required */
3921 +static void pre_thread(struct rpivid_dev *dev,
3922 + struct rpivid_hw_irq_ent *ient,
3923 + rpivid_irq_callback cb, void *v,
3924 + struct rpivid_hw_irq_ctrl *ictl)
3929 + ictl->thread_reqed = true;
3930 + ictl->no_sched++; /* This is unwound in do_thread */
3933 +// Called in irq context
3934 +static void do_irq(struct rpivid_dev * const dev,
3935 + struct rpivid_hw_irq_ctrl * const ictl)
3937 + struct rpivid_hw_irq_ent *ient;
3938 + unsigned long flags;
3940 + spin_lock_irqsave(&ictl->lock, flags);
3943 + spin_unlock_irqrestore(&ictl->lock, flags);
3945 + sched_cb(dev, ictl, ient);
3948 +static void do_claim(struct rpivid_dev * const dev,
3949 + struct rpivid_hw_irq_ent *ient,
3950 + const rpivid_irq_callback cb, void * const v,
3951 + struct rpivid_hw_irq_ctrl * const ictl)
3953 + unsigned long flags;
3955 + ient->next = NULL;
3959 + spin_lock_irqsave(&ictl->lock, flags);
3961 + if (ictl->claim) {
3962 + // If we have a Q then add to end
3963 + ictl->tail->next = ient;
3964 + ictl->tail = ient;
3966 + } else if (!sched_enabled(ictl)) {
3967 + // Empty Q but other activity in progress so Q
3968 + ictl->claim = ient;
3969 + ictl->tail = ient;
3972 + // Nothing else going on - schedule immediately and
3973 + // prevent anything else scheduling claims
3974 + set_claimed(ictl);
3977 + spin_unlock_irqrestore(&ictl->lock, flags);
3979 + sched_cb(dev, ictl, ient);
3982 +/* Enable n claims.
3983 + * n < 0 set to unlimited (default on init)
3984 + * n = 0 if previously unlimited then disable otherwise nop
3985 + * n > 0 if previously unlimited then set to n enables
3986 + * otherwise add n enables
3987 + * The enable count is automatically decremented every time a claim is run
3989 +static void do_enable_claim(struct rpivid_dev * const dev,
3991 + struct rpivid_hw_irq_ctrl * const ictl)
3993 + unsigned long flags;
3994 + struct rpivid_hw_irq_ent *ient;
3996 + spin_lock_irqsave(&ictl->lock, flags);
3997 + ictl->enable = n < 0 ? -1 : ictl->enable <= 0 ? n : ictl->enable + n;
3998 + ient = get_sched(ictl);
3999 + spin_unlock_irqrestore(&ictl->lock, flags);
4001 + sched_cb(dev, ictl, ient);
4004 +static void ictl_init(struct rpivid_hw_irq_ctrl * const ictl, int enables)
4006 + spin_lock_init(&ictl->lock);
4007 + ictl->claim = NULL;
4008 + ictl->tail = NULL;
4010 + ictl->no_sched = 0;
4011 + ictl->enable = enables;
4012 + ictl->thread_reqed = false;
4015 +static void ictl_uninit(struct rpivid_hw_irq_ctrl * const ictl)
4020 +#if !OPT_DEBUG_POLL_IRQ
4021 +static irqreturn_t rpivid_irq_irq(int irq, void *data)
4023 + struct rpivid_dev * const dev = data;
4026 + ictrl = irq_read(dev, ARG_IC_ICTRL);
4027 + if (!(ictrl & ARG_IC_ICTRL_ALL_IRQ_MASK)) {
4028 + v4l2_warn(&dev->v4l2_dev, "IRQ but no IRQ bits set\n");
4032 + // Cancel any/all irqs
4033 + irq_write(dev, ARG_IC_ICTRL, ictrl & ~ARG_IC_ICTRL_SET_ZERO_MASK);
4035 + // Service Active2 before Active1 so Phase 1 can transition to Phase 2
4037 + if (ictrl & ARG_IC_ICTRL_ACTIVE2_INT_SET)
4038 + do_irq(dev, &dev->ic_active2);
4039 + if (ictrl & ARG_IC_ICTRL_ACTIVE1_INT_SET)
4040 + do_irq(dev, &dev->ic_active1);
4042 + return dev->ic_active1.thread_reqed || dev->ic_active2.thread_reqed ?
4043 + IRQ_WAKE_THREAD : IRQ_HANDLED;
4046 +static void do_thread(struct rpivid_dev * const dev,
4047 + struct rpivid_hw_irq_ctrl *const ictl)
4049 + unsigned long flags;
4050 + struct rpivid_hw_irq_ent *ient = NULL;
4052 + spin_lock_irqsave(&ictl->lock, flags);
4054 + if (ictl->thread_reqed) {
4056 + ictl->thread_reqed = false;
4060 + spin_unlock_irqrestore(&ictl->lock, flags);
4062 + sched_cb(dev, ictl, ient);
4065 +static irqreturn_t rpivid_irq_thread(int irq, void *data)
4067 + struct rpivid_dev * const dev = data;
4069 + do_thread(dev, &dev->ic_active1);
4070 + do_thread(dev, &dev->ic_active2);
4072 + return IRQ_HANDLED;
4076 +/* May only be called from Active1 CB
4077 + * IRQs should not be expected until execution continues in the cb
4079 +void rpivid_hw_irq_active1_thread(struct rpivid_dev *dev,
4080 + struct rpivid_hw_irq_ent *ient,
4081 + rpivid_irq_callback thread_cb, void *ctx)
4083 + pre_thread(dev, ient, thread_cb, ctx, &dev->ic_active1);
4086 +void rpivid_hw_irq_active1_enable_claim(struct rpivid_dev *dev,
4089 + do_enable_claim(dev, n, &dev->ic_active1);
4092 +void rpivid_hw_irq_active1_claim(struct rpivid_dev *dev,
4093 + struct rpivid_hw_irq_ent *ient,
4094 + rpivid_irq_callback ready_cb, void *ctx)
4096 + do_claim(dev, ient, ready_cb, ctx, &dev->ic_active1);
4099 +void rpivid_hw_irq_active1_irq(struct rpivid_dev *dev,
4100 + struct rpivid_hw_irq_ent *ient,
4101 + rpivid_irq_callback irq_cb, void *ctx)
4103 + pre_irq(dev, ient, irq_cb, ctx, &dev->ic_active1);
4106 +void rpivid_hw_irq_active2_claim(struct rpivid_dev *dev,
4107 + struct rpivid_hw_irq_ent *ient,
4108 + rpivid_irq_callback ready_cb, void *ctx)
4110 + do_claim(dev, ient, ready_cb, ctx, &dev->ic_active2);
4113 +void rpivid_hw_irq_active2_irq(struct rpivid_dev *dev,
4114 + struct rpivid_hw_irq_ent *ient,
4115 + rpivid_irq_callback irq_cb, void *ctx)
4117 + pre_irq(dev, ient, irq_cb, ctx, &dev->ic_active2);
4120 +int rpivid_hw_probe(struct rpivid_dev *dev)
4122 + struct rpi_firmware *firmware;
4123 + struct device_node *node;
4124 + struct resource *res;
4129 + ictl_init(&dev->ic_active1, RPIVID_P2BUF_COUNT);
4130 + ictl_init(&dev->ic_active2, RPIVID_ICTL_ENABLE_UNLIMITED);
4132 + res = platform_get_resource_byname(dev->pdev, IORESOURCE_MEM, "intc");
4136 + dev->base_irq = devm_ioremap(dev->dev, res->start, resource_size(res));
4137 + if (IS_ERR(dev->base_irq))
4138 + return PTR_ERR(dev->base_irq);
4140 + res = platform_get_resource_byname(dev->pdev, IORESOURCE_MEM, "hevc");
4144 + dev->base_h265 = devm_ioremap(dev->dev, res->start, resource_size(res));
4145 + if (IS_ERR(dev->base_h265))
4146 + return PTR_ERR(dev->base_h265);
4148 + dev->clock = devm_clk_get(&dev->pdev->dev, "hevc");
4149 + if (IS_ERR(dev->clock))
4150 + return PTR_ERR(dev->clock);
4152 + node = rpi_firmware_find_node();
4156 + firmware = rpi_firmware_get(node);
4157 + of_node_put(node);
4159 + return -EPROBE_DEFER;
4161 + dev->max_clock_rate = rpi_firmware_clk_get_max_rate(firmware,
4162 + RPI_FIRMWARE_HEVC_CLK_ID);
4163 + rpi_firmware_put(firmware);
4165 + dev->cache_align = dma_get_cache_alignment();
4167 + // Disable IRQs & reset anything pending
4169 + ARG_IC_ICTRL_ACTIVE1_EN_SET | ARG_IC_ICTRL_ACTIVE2_EN_SET);
4170 + irq_stat = irq_read(dev, 0);
4171 + irq_write(dev, 0, irq_stat);
4173 +#if !OPT_DEBUG_POLL_IRQ
4174 + irq_dec = platform_get_irq(dev->pdev, 0);
4177 + ret = devm_request_threaded_irq(dev->dev, irq_dec,
4179 + rpivid_irq_thread,
4180 + 0, dev_name(dev->dev), dev);
4182 + dev_err(dev->dev, "Failed to request IRQ - %d\n", ret);
4190 +void rpivid_hw_remove(struct rpivid_dev *dev)
4192 + // IRQ auto freed on unload so no need to do it here
4193 + // ioremap auto freed on unload
4194 + ictl_uninit(&dev->ic_active1);
4195 + ictl_uninit(&dev->ic_active2);
4199 +++ b/drivers/staging/media/rpivid/rpivid_hw.h
4201 +/* SPDX-License-Identifier: GPL-2.0 */
4203 + * Raspberry Pi HEVC driver
4205 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
4207 + * Based on the Cedrus VPU driver, that is:
4209 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
4210 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
4211 + * Copyright (C) 2018 Bootlin
4214 +#ifndef _RPIVID_HW_H_
4215 +#define _RPIVID_HW_H_
4217 +struct rpivid_hw_irq_ent {
4218 + struct rpivid_hw_irq_ent *next;
4219 + rpivid_irq_callback cb;
4223 +/* Phase 1 Register offsets */
4228 +#define RPI_SLICE 12
4229 +#define RPI_TILESTART 16
4230 +#define RPI_TILEEND 20
4231 +#define RPI_SLICESTART 24
4232 +#define RPI_MODE 28
4233 +#define RPI_LEFT0 32
4234 +#define RPI_LEFT1 36
4235 +#define RPI_LEFT2 40
4236 +#define RPI_LEFT3 44
4238 +#define RPI_CONTROL 52
4239 +#define RPI_STATUS 56
4240 +#define RPI_VERSION 60
4241 +#define RPI_BFBASE 64
4242 +#define RPI_BFNUM 68
4243 +#define RPI_BFCONTROL 72
4244 +#define RPI_BFSTATUS 76
4245 +#define RPI_PUWBASE 80
4246 +#define RPI_PUWSTRIDE 84
4247 +#define RPI_COEFFWBASE 88
4248 +#define RPI_COEFFWSTRIDE 92
4249 +#define RPI_SLICECMDS 96
4250 +#define RPI_BEGINTILEEND 100
4251 +#define RPI_TRANSFER 104
4252 +#define RPI_CFBASE 108
4253 +#define RPI_CFNUM 112
4254 +#define RPI_CFSTATUS 116
4256 +/* Phase 2 Register offsets */
4258 +#define RPI_PURBASE 0x8000
4259 +#define RPI_PURSTRIDE 0x8004
4260 +#define RPI_COEFFRBASE 0x8008
4261 +#define RPI_COEFFRSTRIDE 0x800C
4262 +#define RPI_NUMROWS 0x8010
4263 +#define RPI_CONFIG2 0x8014
4264 +#define RPI_OUTYBASE 0x8018
4265 +#define RPI_OUTYSTRIDE 0x801C
4266 +#define RPI_OUTCBASE 0x8020
4267 +#define RPI_OUTCSTRIDE 0x8024
4268 +#define RPI_STATUS2 0x8028
4269 +#define RPI_FRAMESIZE 0x802C
4270 +#define RPI_MVBASE 0x8030
4271 +#define RPI_MVSTRIDE 0x8034
4272 +#define RPI_COLBASE 0x8038
4273 +#define RPI_COLSTRIDE 0x803C
4274 +#define RPI_CURRPOC 0x8040
4277 + * Write a general register value
4278 + * Order is unimportant
4280 +static inline void apb_write(const struct rpivid_dev * const dev,
4281 + const unsigned int offset, const u32 val)
4283 + writel_relaxed(val, dev->base_h265 + offset);
4286 +/* Write the final register value that actually starts the phase */
4287 +static inline void apb_write_final(const struct rpivid_dev * const dev,
4288 + const unsigned int offset, const u32 val)
4290 + writel(val, dev->base_h265 + offset);
4293 +static inline u32 apb_read(const struct rpivid_dev * const dev,
4294 + const unsigned int offset)
4296 + return readl(dev->base_h265 + offset);
4299 +static inline void irq_write(const struct rpivid_dev * const dev,
4300 + const unsigned int offset, const u32 val)
4302 + writel(val, dev->base_irq + offset);
4305 +static inline u32 irq_read(const struct rpivid_dev * const dev,
4306 + const unsigned int offset)
4308 + return readl(dev->base_irq + offset);
4311 +static inline void apb_write_vc_addr(const struct rpivid_dev * const dev,
4312 + const unsigned int offset,
4313 + const dma_addr_t a)
4315 + apb_write(dev, offset, (u32)(a >> 6));
4318 +static inline void apb_write_vc_addr_final(const struct rpivid_dev * const dev,
4319 + const unsigned int offset,
4320 + const dma_addr_t a)
4322 + apb_write_final(dev, offset, (u32)(a >> 6));
4325 +static inline void apb_write_vc_len(const struct rpivid_dev * const dev,
4326 + const unsigned int offset,
4327 + const unsigned int x)
4329 + apb_write(dev, offset, (x + 63) >> 6);
4332 +/* *ARG_IC_ICTRL - Interrupt control for ARGON Core*
4333 + * Offset (byte space) = 40'h2b10000
4334 + * Physical Address (byte space) = 40'h7eb10000
4335 + * Verilog Macro Address = `ARG_IC_REG_START + `ARGON_INTCTRL_ICTRL
4336 + * Reset Value = 32'b100x100x_100xxxxx_xxxxxxx0_x100x100
4337 + * Access = RW (32-bit only)
4338 + * Interrupt control logic for ARGON Core.
4340 +#define ARG_IC_ICTRL 0
4342 +/* acc=LWC ACTIVE1_INT FIELD ACCESS: LWC
4345 + * This is set and held when an hevc_active1 interrupt edge is detected
4346 + * The polarity of the edge is set by the ACTIVE1_EDGE field
4347 + * Write a 1 to this bit to clear down the latched interrupt
4348 + * The latched interrupt is only enabled out onto the interrupt line if
4349 + * ACTIVE1_EN is set
4350 + * Reset value is *0* decimal.
4352 +#define ARG_IC_ICTRL_ACTIVE1_INT_SET BIT(0)
4354 +/* ACTIVE1_EDGE Sets the polarity of the interrupt edge detection logic
4355 + * This logic detects edges of the hevc_active1 line from the argon core
4356 + * 0 = negedge, 1 = posedge
4357 + * Reset value is *0* decimal.
4359 +#define ARG_IC_ICTRL_ACTIVE1_EDGE_SET BIT(1)
4361 +/* ACTIVE1_EN Enables ACTIVE1_INT out onto the argon interrupt line.
4362 + * If this isn't set, the interrupt logic will work but no interrupt will be
4363 + * set to the interrupt controller
4364 + * Reset value is *1* decimal.
4366 + * [JC] The above appears to be a lie - if unset then b0 is never set
4368 +#define ARG_IC_ICTRL_ACTIVE1_EN_SET BIT(2)
4370 +/* acc=RO ACTIVE1_STATUS FIELD ACCESS: RO
4372 + * The current status of the hevc_active1 signal
4374 +#define ARG_IC_ICTRL_ACTIVE1_STATUS_SET BIT(3)
4376 +/* acc=LWC ACTIVE2_INT FIELD ACCESS: LWC
4379 + * This is set and held when an hevc_active2 interrupt edge is detected
4380 + * The polarity of the edge is set by the ACTIVE2_EDGE field
4381 + * Write a 1 to this bit to clear down the latched interrupt
4382 + * The latched interrupt is only enabled out onto the interrupt line if
4383 + * ACTIVE2_EN is set
4384 + * Reset value is *0* decimal.
4386 +#define ARG_IC_ICTRL_ACTIVE2_INT_SET BIT(4)
4388 +/* ACTIVE2_EDGE Sets the polarity of the interrupt edge detection logic
4389 + * This logic detects edges of the hevc_active2 line from the argon core
4390 + * 0 = negedge, 1 = posedge
4391 + * Reset value is *0* decimal.
4393 +#define ARG_IC_ICTRL_ACTIVE2_EDGE_SET BIT(5)
4395 +/* ACTIVE2_EN Enables ACTIVE2_INT out onto the argon interrupt line.
4396 + * If this isn't set, the interrupt logic will work but no interrupt will be
4397 + * set to the interrupt controller
4398 + * Reset value is *1* decimal.
4400 +#define ARG_IC_ICTRL_ACTIVE2_EN_SET BIT(6)
4402 +/* acc=RO ACTIVE2_STATUS FIELD ACCESS: RO
4404 + * The current status of the hevc_active2 signal
4406 +#define ARG_IC_ICTRL_ACTIVE2_STATUS_SET BIT(7)
4408 +/* TEST_INT Forces the argon int high for test purposes.
4409 + * Reset value is *0* decimal.
4411 +#define ARG_IC_ICTRL_TEST_INT BIT(8)
4412 +#define ARG_IC_ICTRL_SPARE BIT(9)
4414 +/* acc=RO VP9_INTERRUPT_STATUS FIELD ACCESS: RO
4416 + * The current status of the vp9_interrupt signal
4418 +#define ARG_IC_ICTRL_VP9_INTERRUPT_STATUS BIT(10)
4420 +/* AIO_INT_ENABLE 1 = Or the AIO int in with the Argon int so the VPU can see
4422 + * 0 = the AIO int is masked. (It should still be connected to the GIC though).
4424 +#define ARG_IC_ICTRL_AIO_INT_ENABLE BIT(20)
4425 +#define ARG_IC_ICTRL_H264_ACTIVE_INT BIT(21)
4426 +#define ARG_IC_ICTRL_H264_ACTIVE_EDGE BIT(22)
4427 +#define ARG_IC_ICTRL_H264_ACTIVE_EN BIT(23)
4428 +#define ARG_IC_ICTRL_H264_ACTIVE_STATUS BIT(24)
4429 +#define ARG_IC_ICTRL_H264_INTERRUPT_INT BIT(25)
4430 +#define ARG_IC_ICTRL_H264_INTERRUPT_EDGE BIT(26)
4431 +#define ARG_IC_ICTRL_H264_INTERRUPT_EN BIT(27)
4433 +/* acc=RO H264_INTERRUPT_STATUS FIELD ACCESS: RO
4435 + * The current status of the h264_interrupt signal
4437 +#define ARG_IC_ICTRL_H264_INTERRUPT_STATUS BIT(28)
4439 +/* acc=LWC VP9_INTERRUPT_INT FIELD ACCESS: LWC
4442 + * This is set and held when a vp9_interrupt interrupt edge is detected
4443 + * The polarity of the edge is set by the VP9_INTERRUPT_EDGE field
4444 + * Write a 1 to this bit to clear down the latched interrupt
4445 + * The latched interrupt is only enabled out onto the interrupt line if
4446 + * VP9_INTERRUPT_EN is set
4447 + * Reset value is *0* decimal.
4449 +#define ARG_IC_ICTRL_VP9_INTERRUPT_INT BIT(29)
4451 +/* VP9_INTERRUPT_EDGE Sets the polarity of the interrupt edge detection logic
4452 + * This logic detects edges of the vp9_interrupt line from the argon h264 core
4453 + * 0 = negedge, 1 = posedge
4454 + * Reset value is *0* decimal.
4456 +#define ARG_IC_ICTRL_VP9_INTERRUPT_EDGE BIT(30)
4458 +/* VP9_INTERRUPT_EN Enables VP9_INTERRUPT_INT out onto the argon interrupt line.
4459 + * If this isn't set, the interrupt logic will work but no interrupt will be
4460 + * set to the interrupt controller
4461 + * Reset value is *1* decimal.
4463 +#define ARG_IC_ICTRL_VP9_INTERRUPT_EN BIT(31)
4465 +/* Bits 19:12, 11 reserved - read ?, write 0 */
4466 +#define ARG_IC_ICTRL_SET_ZERO_MASK ((0xff << 12) | BIT(11))
4469 +#define ARG_IC_ICTRL_ALL_IRQ_MASK (\
4470 + ARG_IC_ICTRL_VP9_INTERRUPT_INT |\
4471 + ARG_IC_ICTRL_H264_INTERRUPT_INT |\
4472 + ARG_IC_ICTRL_ACTIVE1_INT_SET |\
4473 + ARG_IC_ICTRL_ACTIVE2_INT_SET)
4475 +/* Regulate claim Q */
4476 +void rpivid_hw_irq_active1_enable_claim(struct rpivid_dev *dev,
4478 +/* Auto release once all CBs called */
4479 +void rpivid_hw_irq_active1_claim(struct rpivid_dev *dev,
4480 + struct rpivid_hw_irq_ent *ient,
4481 + rpivid_irq_callback ready_cb, void *ctx);
4482 +/* May only be called in claim cb */
4483 +void rpivid_hw_irq_active1_irq(struct rpivid_dev *dev,
4484 + struct rpivid_hw_irq_ent *ient,
4485 + rpivid_irq_callback irq_cb, void *ctx);
4486 +/* May only be called in irq cb */
4487 +void rpivid_hw_irq_active1_thread(struct rpivid_dev *dev,
4488 + struct rpivid_hw_irq_ent *ient,
4489 + rpivid_irq_callback thread_cb, void *ctx);
4491 +/* Auto release once all CBs called */
4492 +void rpivid_hw_irq_active2_claim(struct rpivid_dev *dev,
4493 + struct rpivid_hw_irq_ent *ient,
4494 + rpivid_irq_callback ready_cb, void *ctx);
4495 +/* May only be called in claim cb */
4496 +void rpivid_hw_irq_active2_irq(struct rpivid_dev *dev,
4497 + struct rpivid_hw_irq_ent *ient,
4498 + rpivid_irq_callback irq_cb, void *ctx);
4500 +int rpivid_hw_probe(struct rpivid_dev *dev);
4501 +void rpivid_hw_remove(struct rpivid_dev *dev);
4505 +++ b/drivers/staging/media/rpivid/rpivid_video.c
4507 +// SPDX-License-Identifier: GPL-2.0
4509 + * Raspberry Pi HEVC driver
4511 + * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
4513 + * Based on the Cedrus VPU driver, that is:
4515 + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
4516 + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
4517 + * Copyright (C) 2018 Bootlin
4520 +#include <media/videobuf2-dma-contig.h>
4521 +#include <media/v4l2-device.h>
4522 +#include <media/v4l2-ioctl.h>
4523 +#include <media/v4l2-event.h>
4524 +#include <media/v4l2-mem2mem.h>
4526 +#include "rpivid.h"
4527 +#include "rpivid_hw.h"
4528 +#include "rpivid_video.h"
4529 +#include "rpivid_dec.h"
4531 +#define RPIVID_DECODE_SRC BIT(0)
4532 +#define RPIVID_DECODE_DST BIT(1)
4534 +#define RPIVID_MIN_WIDTH 16U
4535 +#define RPIVID_MIN_HEIGHT 16U
4536 +#define RPIVID_DEFAULT_WIDTH 1920U
4537 +#define RPIVID_DEFAULT_HEIGHT 1088U
4538 +#define RPIVID_MAX_WIDTH 4096U
4539 +#define RPIVID_MAX_HEIGHT 4096U
4541 +static inline struct rpivid_ctx *rpivid_file2ctx(struct file *file)
4543 + return container_of(file->private_data, struct rpivid_ctx, fh);
4546 +/* constrain x to y,y*2 */
4547 +static inline unsigned int constrain2x(unsigned int x, unsigned int y)
4551 + (x > y * 2) ? y : x;
4554 +size_t rpivid_round_up_size(const size_t x)
4556 + /* Admit no size < 256 */
4557 + const unsigned int n = x < 256 ? 8 : ilog2(x);
4559 + return x >= (3 << n) ? 4 << n : (3 << n);
4562 +size_t rpivid_bit_buf_size(unsigned int w, unsigned int h, unsigned int bits_minus8)
4564 + const size_t wxh = w * h;
4565 + size_t bits_alloc;
4567 + /* Annex A gives a min compression of 2 @ lvl 3.1
4568 + * (wxh <= 983040) and min 4 thereafter but avoid
4569 + * the oddity of 983041 having a lower limit than

4571 + * Multiply by 3/2 for 4:2:0
4573 + bits_alloc = wxh < 983040 ? wxh * 3 / 4 :
4574 + wxh < 983040 * 2 ? 983040 * 3 / 4 :
4576 + /* Allow for bit depth */
4577 + bits_alloc += (bits_alloc * bits_minus8) / 8;
4578 + return rpivid_round_up_size(bits_alloc);
4581 +void rpivid_prepare_src_format(struct v4l2_pix_format_mplane *pix_fmt)
4587 + w = pix_fmt->width;
4588 + h = pix_fmt->height;
4590 + w = RPIVID_DEFAULT_WIDTH;
4591 + h = RPIVID_DEFAULT_HEIGHT;
4593 + if (w > RPIVID_MAX_WIDTH)
4594 + w = RPIVID_MAX_WIDTH;
4595 + if (h > RPIVID_MAX_HEIGHT)
4596 + h = RPIVID_MAX_HEIGHT;
4598 + if (!pix_fmt->plane_fmt[0].sizeimage ||
4599 + pix_fmt->plane_fmt[0].sizeimage > SZ_32M) {
4600 + /* Unspecified or way too big - pick max for size */
4601 + size = rpivid_bit_buf_size(w, h, 2);
4603 + /* Set a minimum */
4604 + size = max_t(u32, SZ_4K, pix_fmt->plane_fmt[0].sizeimage);
4606 + pix_fmt->pixelformat = V4L2_PIX_FMT_HEVC_SLICE;
4607 + pix_fmt->width = w;
4608 + pix_fmt->height = h;
4609 + pix_fmt->num_planes = 1;
4610 + pix_fmt->field = V4L2_FIELD_NONE;
4611 + /* Zero bytes per line for encoded source. */
4612 + pix_fmt->plane_fmt[0].bytesperline = 0;
4613 + pix_fmt->plane_fmt[0].sizeimage = size;
4616 +/* Take any pix_format and make it valid */
4617 +static void rpivid_prepare_dst_format(struct v4l2_pix_format_mplane *pix_fmt)
4619 + unsigned int width = pix_fmt->width;
4620 + unsigned int height = pix_fmt->height;
4621 + unsigned int sizeimage = pix_fmt->plane_fmt[0].sizeimage;
4622 + unsigned int bytesperline = pix_fmt->plane_fmt[0].bytesperline;
4625 + width = RPIVID_DEFAULT_WIDTH;
4626 + if (width > RPIVID_MAX_WIDTH)
4627 + width = RPIVID_MAX_WIDTH;
4629 + height = RPIVID_DEFAULT_HEIGHT;
4630 + if (height > RPIVID_MAX_HEIGHT)
4631 + height = RPIVID_MAX_HEIGHT;
4633 + /* For column formats set bytesperline to column height (stride2) */
4634 + switch (pix_fmt->pixelformat) {
4636 + pix_fmt->pixelformat = V4L2_PIX_FMT_NV12_COL128;
4638 + case V4L2_PIX_FMT_NV12_COL128:
4639 + /* Width rounds up to columns */
4640 + width = ALIGN(width, 128);
4642 + /* 16 aligned height - not sure we even need that */
4643 + height = ALIGN(height, 16);
4645 + * Accept suggested shape if at least min & < 2 * min
4647 + bytesperline = constrain2x(bytesperline, height * 3 / 2);
4650 + * Again allow plausible variation in case added padding is
4653 + sizeimage = constrain2x(sizeimage, bytesperline * width);
4656 + case V4L2_PIX_FMT_NV12_10_COL128:
4657 + /* width in pixels (3 pels = 4 bytes) rounded to 128 byte
4660 + width = ALIGN(((width + 2) / 3), 32) * 3;
4662 + /* 16-aligned height. */
4663 + height = ALIGN(height, 16);
4666 + * Accept suggested shape if at least min & < 2 * min
4668 + bytesperline = constrain2x(bytesperline, height * 3 / 2);
4671 + * Again allow plausible variation in case added padding is
4674 + sizeimage = constrain2x(sizeimage,
4675 + bytesperline * width * 4 / 3);
4679 + pix_fmt->width = width;
4680 + pix_fmt->height = height;
4682 + pix_fmt->field = V4L2_FIELD_NONE;
4683 + pix_fmt->plane_fmt[0].bytesperline = bytesperline;
4684 + pix_fmt->plane_fmt[0].sizeimage = sizeimage;
4685 + pix_fmt->num_planes = 1;
4688 +static int rpivid_querycap(struct file *file, void *priv,
4689 + struct v4l2_capability *cap)
4691 + strscpy(cap->driver, RPIVID_NAME, sizeof(cap->driver));
4692 + strscpy(cap->card, RPIVID_NAME, sizeof(cap->card));
4693 + snprintf(cap->bus_info, sizeof(cap->bus_info),
4694 + "platform:%s", RPIVID_NAME);
4699 +static int rpivid_enum_fmt_vid_out(struct file *file, void *priv,
4700 + struct v4l2_fmtdesc *f)
4704 + // H.265 Slice only currently
4705 + if (f->index == 0) {
4706 + f->pixelformat = V4L2_PIX_FMT_HEVC_SLICE;
4713 +static int rpivid_hevc_validate_sps(const struct v4l2_ctrl_hevc_sps * const sps)
4715 + const unsigned int ctb_log2_size_y =
4716 + sps->log2_min_luma_coding_block_size_minus3 + 3 +
4717 + sps->log2_diff_max_min_luma_coding_block_size;
4718 + const unsigned int min_tb_log2_size_y =
4719 + sps->log2_min_luma_transform_block_size_minus2 + 2;
4720 + const unsigned int max_tb_log2_size_y = min_tb_log2_size_y +
4721 + sps->log2_diff_max_min_luma_transform_block_size;
4723 + /* Local limitations */
4724 + if (sps->pic_width_in_luma_samples < 32 ||
4725 + sps->pic_width_in_luma_samples > 4096)
4727 + if (sps->pic_height_in_luma_samples < 32 ||
4728 + sps->pic_height_in_luma_samples > 4096)
4730 + if (!(sps->bit_depth_luma_minus8 == 0 ||
4731 + sps->bit_depth_luma_minus8 == 2))
4733 + if (sps->bit_depth_luma_minus8 != sps->bit_depth_chroma_minus8)
4735 + if (sps->chroma_format_idc != 1)
4738 + /* Limits from H.265 7.4.3.2.1 */
4739 + if (sps->log2_max_pic_order_cnt_lsb_minus4 > 12)
4741 + if (sps->sps_max_dec_pic_buffering_minus1 > 15)
4743 + if (sps->sps_max_num_reorder_pics >
4744 + sps->sps_max_dec_pic_buffering_minus1)
4746 + if (ctb_log2_size_y > 6)
4748 + if (max_tb_log2_size_y > 5)
4750 + if (max_tb_log2_size_y > ctb_log2_size_y)
4752 + if (sps->max_transform_hierarchy_depth_inter >
4753 + (ctb_log2_size_y - min_tb_log2_size_y))
4755 + if (sps->max_transform_hierarchy_depth_intra >
4756 + (ctb_log2_size_y - min_tb_log2_size_y))
4758 + /* Check pcm stuff */
4759 + if (sps->num_short_term_ref_pic_sets > 64)
4761 + if (sps->num_long_term_ref_pics_sps > 32)
4766 +static inline int is_sps_set(const struct v4l2_ctrl_hevc_sps * const sps)
4768 + return sps && sps->pic_width_in_luma_samples != 0;
+static u32 pixelformat_from_sps(const struct v4l2_ctrl_hevc_sps * const sps,
+				const int index)
+{
+	u32 pf = 0;
+
+	if (!is_sps_set(sps) || !rpivid_hevc_validate_sps(sps)) {
+		/* Treat this as an error? For now return both */
+		if (index == 0)
+			pf = V4L2_PIX_FMT_NV12_COL128;
+		else if (index == 1)
+			pf = V4L2_PIX_FMT_NV12_10_COL128;
+	} else if (index == 0) {
+		if (sps->bit_depth_luma_minus8 == 0)
+			pf = V4L2_PIX_FMT_NV12_COL128;
+		else if (sps->bit_depth_luma_minus8 == 2)
+			pf = V4L2_PIX_FMT_NV12_10_COL128;
+	}
+
+	return pf;
+}
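The enumeration rule this implements — offer both formats by index when no valid SPS is known, otherwise offer only the format matching the stream bit depth — can be illustrated standalone (`pick_format` and the `FMT_*` constants below are stand-ins invented for illustration, not the driver's symbols):

```c
#include <stdint.h>

#define FMT_NV12    1u  /* stand-in for V4L2_PIX_FMT_NV12_COL128 */
#define FMT_NV12_10 2u  /* stand-in for V4L2_PIX_FMT_NV12_10_COL128 */

/* With no valid SPS, both formats are enumerable (index 0 and 1).
 * With a valid SPS, only the format matching the 8- or 10-bit
 * stream depth is offered, at index 0. Returns 0 past the end. */
static uint32_t pick_format(int sps_valid,
			    unsigned int bit_depth_luma_minus8,
			    int index)
{
	if (!sps_valid) {
		if (index == 0)
			return FMT_NV12;
		if (index == 1)
			return FMT_NV12_10;
	} else if (index == 0) {
		if (bit_depth_luma_minus8 == 0)
			return FMT_NV12;
		if (bit_depth_luma_minus8 == 2)
			return FMT_NV12_10;
	}
	return 0;
}
```

Returning 0 past the last valid index is what lets ENUM_FMT terminate cleanly with -EINVAL.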
+static struct v4l2_pix_format_mplane
+rpivid_hevc_default_dst_fmt(struct rpivid_ctx * const ctx)
+{
+	const struct v4l2_ctrl_hevc_sps * const sps =
+		rpivid_find_control_data(ctx, V4L2_CID_STATELESS_HEVC_SPS);
+	struct v4l2_pix_format_mplane pix_fmt;
+
+	memset(&pix_fmt, 0, sizeof(pix_fmt));
+	if (is_sps_set(sps)) {
+		pix_fmt.width = sps->pic_width_in_luma_samples;
+		pix_fmt.height = sps->pic_height_in_luma_samples;
+		pix_fmt.pixelformat = pixelformat_from_sps(sps, 0);
+	}
+
+	rpivid_prepare_dst_format(&pix_fmt);
+
+	return pix_fmt;
+}
+static u32 rpivid_hevc_get_dst_pixelformat(struct rpivid_ctx * const ctx,
+					   const unsigned int index)
+{
+	const struct v4l2_ctrl_hevc_sps * const sps =
+		rpivid_find_control_data(ctx, V4L2_CID_STATELESS_HEVC_SPS);
+
+	return pixelformat_from_sps(sps, index);
+}
+static int rpivid_enum_fmt_vid_cap(struct file *file, void *priv,
+				   struct v4l2_fmtdesc *f)
+{
+	struct rpivid_ctx * const ctx = rpivid_file2ctx(file);
+	const u32 pf = rpivid_hevc_get_dst_pixelformat(ctx, f->index);
+
+	if (pf == 0)
+		return -EINVAL;
+
+	f->pixelformat = pf;
+	return 0;
+}
+/*
+ * get dst format - sets it to default if otherwise unset
+ * returns a pointer to the struct as a convenience
+ */
+static struct v4l2_pix_format_mplane *get_dst_fmt(struct rpivid_ctx *const ctx)
+{
+	if (!ctx->dst_fmt_set)
+		ctx->dst_fmt = rpivid_hevc_default_dst_fmt(ctx);
+	return &ctx->dst_fmt;
+}
+static int rpivid_g_fmt_vid_cap(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct rpivid_ctx *ctx = rpivid_file2ctx(file);
+
+	f->fmt.pix_mp = *get_dst_fmt(ctx);
+	return 0;
+}
+
+static int rpivid_g_fmt_vid_out(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct rpivid_ctx *ctx = rpivid_file2ctx(file);
+
+	f->fmt.pix_mp = ctx->src_fmt;
+	return 0;
+}
+static inline void copy_color(struct v4l2_pix_format_mplane *d,
+			      const struct v4l2_pix_format_mplane *s)
+{
+	d->colorspace = s->colorspace;
+	d->xfer_func = s->xfer_func;
+	d->ycbcr_enc = s->ycbcr_enc;
+	d->quantization = s->quantization;
+}
+static int rpivid_try_fmt_vid_cap(struct file *file, void *priv,
+				  struct v4l2_format *f)
+{
+	struct rpivid_ctx *ctx = rpivid_file2ctx(file);
+	const struct v4l2_ctrl_hevc_sps * const sps =
+		rpivid_find_control_data(ctx, V4L2_CID_STATELESS_HEVC_SPS);
+	u32 pixelformat;
+	int i;
+
+	for (i = 0; (pixelformat = pixelformat_from_sps(sps, i)) != 0; i++) {
+		if (f->fmt.pix_mp.pixelformat == pixelformat)
+			break;
+	}
+
+	// We don't have any way of finding out colourspace so believe
+	// anything we are told - take anything set in src as a default
+	if (f->fmt.pix_mp.colorspace == V4L2_COLORSPACE_DEFAULT)
+		copy_color(&f->fmt.pix_mp, &ctx->src_fmt);
+
+	f->fmt.pix_mp.pixelformat = pixelformat;
+	rpivid_prepare_dst_format(&f->fmt.pix_mp);
+	return 0;
+}
+static int rpivid_try_fmt_vid_out(struct file *file, void *priv,
+				  struct v4l2_format *f)
+{
+	rpivid_prepare_src_format(&f->fmt.pix_mp);
+	return 0;
+}
+static int rpivid_s_fmt_vid_cap(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct rpivid_ctx *ctx = rpivid_file2ctx(file);
+	struct vb2_queue *vq;
+	int ret;
+
+	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+	if (vb2_is_busy(vq))
+		return -EBUSY;
+
+	ret = rpivid_try_fmt_vid_cap(file, priv, f);
+	if (ret)
+		return ret;
+
+	ctx->dst_fmt = f->fmt.pix_mp;
+	ctx->dst_fmt_set = 1;
+
+	return 0;
+}
+static int rpivid_s_fmt_vid_out(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct rpivid_ctx *ctx = rpivid_file2ctx(file);
+	struct vb2_queue *vq;
+	int ret;
+
+	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+	if (vb2_is_busy(vq))
+		return -EBUSY;
+
+	ret = rpivid_try_fmt_vid_out(file, priv, f);
+	if (ret)
+		return ret;
+
+	ctx->src_fmt = f->fmt.pix_mp;
+	ctx->dst_fmt_set = 0; // Setting src invalidates dst
+
+	vq->subsystem_flags |=
+		VB2_V4L2_FL_SUPPORTS_M2M_HOLD_CAPTURE_BUF;
+
+	/* Propagate colorspace information to capture. */
+	copy_color(&ctx->dst_fmt, &f->fmt.pix_mp);
+
+	return 0;
+}
+const struct v4l2_ioctl_ops rpivid_ioctl_ops = {
+	.vidioc_querycap		= rpivid_querycap,
+
+	.vidioc_enum_fmt_vid_cap	= rpivid_enum_fmt_vid_cap,
+	.vidioc_g_fmt_vid_cap_mplane	= rpivid_g_fmt_vid_cap,
+	.vidioc_try_fmt_vid_cap_mplane	= rpivid_try_fmt_vid_cap,
+	.vidioc_s_fmt_vid_cap_mplane	= rpivid_s_fmt_vid_cap,
+
+	.vidioc_enum_fmt_vid_out	= rpivid_enum_fmt_vid_out,
+	.vidioc_g_fmt_vid_out_mplane	= rpivid_g_fmt_vid_out,
+	.vidioc_try_fmt_vid_out_mplane	= rpivid_try_fmt_vid_out,
+	.vidioc_s_fmt_vid_out_mplane	= rpivid_s_fmt_vid_out,
+
+	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
+	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
+	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
+	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
+	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
+	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
+	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
+
+	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
+	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
+
+	.vidioc_try_decoder_cmd		= v4l2_m2m_ioctl_stateless_try_decoder_cmd,
+	.vidioc_decoder_cmd		= v4l2_m2m_ioctl_stateless_decoder_cmd,
+
+	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
+	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
+};
+static int rpivid_queue_setup(struct vb2_queue *vq, unsigned int *nbufs,
+			      unsigned int *nplanes, unsigned int sizes[],
+			      struct device *alloc_devs[])
+{
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vq);
+	struct v4l2_pix_format_mplane *pix_fmt;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type))
+		pix_fmt = &ctx->src_fmt;
+	else
+		pix_fmt = get_dst_fmt(ctx);
+
+	if (*nplanes) {
+		if (sizes[0] < pix_fmt->plane_fmt[0].sizeimage)
+			return -EINVAL;
+	} else {
+		sizes[0] = pix_fmt->plane_fmt[0].sizeimage;
+		*nplanes = 1;
+	}
+
+	return 0;
+}
+static void rpivid_queue_cleanup(struct vb2_queue *vq, u32 state)
+{
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vq);
+	struct vb2_v4l2_buffer *vbuf;
+
+	for (;;) {
+		if (V4L2_TYPE_IS_OUTPUT(vq->type))
+			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+		else
+			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+		if (!vbuf)
+			return;
+
+		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
+					   &ctx->hdl);
+		v4l2_m2m_buf_done(vbuf, state);
+	}
+}
+static int rpivid_buf_out_validate(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+
+	vbuf->field = V4L2_FIELD_NONE;
+	return 0;
+}
+static int rpivid_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vq);
+	struct v4l2_pix_format_mplane *pix_fmt;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type))
+		pix_fmt = &ctx->src_fmt;
+	else
+		pix_fmt = &ctx->dst_fmt;
+
+	if (vb2_plane_size(vb, 0) < pix_fmt->plane_fmt[0].sizeimage)
+		return -EINVAL;
+
+	vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
+
+	return 0;
+}
+/* Only stops the clock if streaming is off on both output & capture */
+static void stop_clock(struct rpivid_dev *dev, struct rpivid_ctx *ctx)
+{
+	if (ctx->src_stream_on ||
+	    ctx->dst_stream_on)
+		return;
+
+	clk_set_min_rate(dev->clock, 0);
+	clk_disable_unprepare(dev->clock);
+}
+/* Always starts the clock if it isn't already on this ctx */
+static int start_clock(struct rpivid_dev *dev, struct rpivid_ctx *ctx)
+{
+	int rv;
+
+	rv = clk_set_min_rate(dev->clock, dev->max_clock_rate);
+	if (rv) {
+		dev_err(dev->dev, "Failed to set clock rate\n");
+		return rv;
+	}
+
+	rv = clk_prepare_enable(dev->clock);
+	if (rv) {
+		dev_err(dev->dev, "Failed to enable clock\n");
+		return rv;
+	}
+
+	return 0;
+}
+static int rpivid_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vq);
+	struct rpivid_dev *dev = ctx->dev;
+	int ret = 0;
+
+	if (!V4L2_TYPE_IS_OUTPUT(vq->type)) {
+		ctx->dst_stream_on = 1;
+		goto ok;
+	}
+
+	if (ctx->src_fmt.pixelformat != V4L2_PIX_FMT_HEVC_SLICE) {
+		ret = -EINVAL;
+		goto fail_cleanup;
+	}
+
+	if (ctx->src_stream_on)
+		goto ok;
+
+	ret = start_clock(dev, ctx);
+	if (ret)
+		goto fail_cleanup;
+
+	if (dev->dec_ops->start)
+		ret = dev->dec_ops->start(ctx);
+	if (ret)
+		goto fail_stop_clock;
+
+	ctx->src_stream_on = 1;
+ok:
+	return 0;
+
+fail_stop_clock:
+	stop_clock(dev, ctx);
+fail_cleanup:
+	v4l2_err(&dev->v4l2_dev, "%s: qtype=%d: FAIL\n", __func__, vq->type);
+	rpivid_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
+	return ret;
+}
+static void rpivid_stop_streaming(struct vb2_queue *vq)
+{
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vq);
+	struct rpivid_dev *dev = ctx->dev;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
+		ctx->src_stream_on = 0;
+		if (dev->dec_ops->stop)
+			dev->dec_ops->stop(ctx);
+	} else {
+		ctx->dst_stream_on = 0;
+	}
+
+	rpivid_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
+
+	vb2_wait_for_all_buffers(vq);
+
+	stop_clock(dev, ctx);
+}
+static void rpivid_buf_queue(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+}
+
+static void rpivid_buf_request_complete(struct vb2_buffer *vb)
+{
+	struct rpivid_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
+}
+static struct vb2_ops rpivid_qops = {
+	.queue_setup		= rpivid_queue_setup,
+	.buf_prepare		= rpivid_buf_prepare,
+	.buf_queue		= rpivid_buf_queue,
+	.buf_out_validate	= rpivid_buf_out_validate,
+	.buf_request_complete	= rpivid_buf_request_complete,
+	.start_streaming	= rpivid_start_streaming,
+	.stop_streaming		= rpivid_stop_streaming,
+	.wait_prepare		= vb2_ops_wait_prepare,
+	.wait_finish		= vb2_ops_wait_finish,
+};
+int rpivid_queue_init(void *priv, struct vb2_queue *src_vq,
+		      struct vb2_queue *dst_vq)
+{
+	struct rpivid_ctx *ctx = priv;
+	int ret;
+
+	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+	src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+	src_vq->drv_priv = ctx;
+	src_vq->buf_struct_size = sizeof(struct rpivid_buffer);
+	src_vq->ops = &rpivid_qops;
+	src_vq->mem_ops = &vb2_dma_contig_memops;
+	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	src_vq->lock = &ctx->ctx_mutex;
+	src_vq->dev = ctx->dev->dev;
+	src_vq->supports_requests = true;
+	src_vq->requires_requests = true;
+
+	ret = vb2_queue_init(src_vq);
+	if (ret)
+		return ret;
+
+	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+	dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+	dst_vq->drv_priv = ctx;
+	dst_vq->buf_struct_size = sizeof(struct rpivid_buffer);
+	dst_vq->min_buffers_needed = 1;
+	dst_vq->ops = &rpivid_qops;
+	dst_vq->mem_ops = &vb2_dma_contig_memops;
+	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	dst_vq->lock = &ctx->ctx_mutex;
+	dst_vq->dev = ctx->dev->dev;
+
+	return vb2_queue_init(dst_vq);
+}
+++ b/drivers/staging/media/rpivid/rpivid_video.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Raspberry Pi HEVC driver
+ *
+ * Copyright (C) 2020 Raspberry Pi (Trading) Ltd
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#ifndef _RPIVID_VIDEO_H_
+#define _RPIVID_VIDEO_H_
+struct rpivid_format {
+	u32		pixelformat;
+	u32		directions;
+	unsigned int	capabilities;
+};
+
+extern const struct v4l2_ioctl_ops rpivid_ioctl_ops;
+int rpivid_queue_init(void *priv, struct vb2_queue *src_vq,
+		      struct vb2_queue *dst_vq);
+
+size_t rpivid_bit_buf_size(unsigned int w, unsigned int h,
+			   unsigned int bits_minus8);
+size_t rpivid_round_up_size(const size_t x);
+
+void rpivid_prepare_src_format(struct v4l2_pix_format_mplane *pix_fmt);