Video and Vision Processing Suite Intel® FPGA IP User Guide

ID 683329
Date 12/31/2023
Public

7.3. Protocol Converter IP Functional Description

Protocol Converter - Avalon Streaming Video to Intel FPGA Streaming Video Lite

The IP converts the protocol in three steps:

  1. Changes the Avalon Streaming ready latency from 1 to 0.

    Avalon Streaming Video specifies a ready latency of 1. The IP converts the ready latency to 0 to match the ready-valid handshake mechanism specified for AXI4-Stream.

  2. Removes all packets other than video packets from the stream.

    Avalon Streaming Video specifies a mechanism to assign a type identifier (a number between 0 and 15) to each packet in the stream. Type-0 packets are frames of pixel data; all other packet types carry nonvideo data. Packets of type 15 (referred to as metapackets) contain metadata that specifies the width, height, and interlacing properties of the subsequent type-0 video packets. Intel FPGA Streaming Video lite does not allow metapackets (or any other nonvideo packets) in the stream, so the IP discards all packets with a type greater than 0, including the type-15 metapackets. However, the IP parses the metapackets during the discard process to extract the expected width of the video frames that follow, and it uses this information in the next step of the conversion.

  3. Splits frame packets into line packets.

    Avalon Streaming Video specifies that each video packet contains one cycle of header data followed by all of the pixels required for an interlaced or progressive frame of video. The header data specifies the packet type. Intel FPGA Streaming Video requires that each packet is one line of video data with no header information. The IP strips the header from each incoming Avalon Streaming Video frame packet and splits the remaining pixel data into multiple packets, each containing a single video line. The IP extracts the expected width of the video frame from the discarded metapacket that precedes the frame and uses this value to determine where to split the incoming frame packet into output line packets. The IP replaces the Avalon Streaming startofpacket and endofpacket signals with the AXI4-Stream tlast signal and creates the tuser signal, and it reformats the incoming Avalon Streaming data to create byte-aligned AXI4-Stream tdata, as shown in the sketch after this list.
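
The following Python sketch models steps 2 and 3 at the packet level only, not the cycle-accurate AXI4-Stream behavior of the IP. The packet representation and names such as avalon_to_lite and expected_width are illustrative assumptions, not signals or registers of the IP.

    # Behavioral model only: a packet is (type, list_of_pixels); beats, tuser, and tlast
    # are not modeled. The width comes from the type-15 control (meta) packet, as
    # described in step 2 above.
    def avalon_to_lite(packets):
        """Discard nonvideo packets and split each type-0 frame packet into line packets."""
        expected_width = None
        out_lines = []
        for pkt_type, payload in packets:
            if pkt_type == 15:
                # Parse the expected frame width from the metapacket, then discard it.
                expected_width = payload[0]          # assumed layout: [width, height, interlacing]
            elif pkt_type == 0:
                assert expected_width is not None, "a control packet must precede the frame"
                for start in range(0, len(payload), expected_width):
                    line = payload[start:start + expected_width]
                    sof = (start == 0)               # models tuser bit 0 on the first pixel of the frame
                    out_lines.append((sof, line))    # the end of each line packet models tlast
            # Packet types 1-14 are discarded without further processing.
        return out_lines

    # Example: a 4x3 frame becomes three line packets of four pixels each.
    lines = avalon_to_lite([(15, [4, 3, 0]), (0, list(range(12)))])
    assert len(lines) == 3 and lines[0][0] is True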

Avalon streaming video to Intel FPGA streaming video full

Converting Avalon streaming video to Intel FPGA streaming video full is similar to converting to Intel FPGA streaming video lite. Steps 1 and 3 are the same, but step 2 is different because Intel FPGA streaming video full supports nonvideo packets. The nonvideo data in the Avalon streaming video input can be retained and does not need to be discarded.

Nonvideo packets in Intel FPGA streaming video full are indicated by asserting bit 1 of the tuser field on the first cycle of the packet. As with Avalon streaming video, each nonvideo packet in Intel FPGA streaming video full has an associated packet type, which is indicated in the five LSBs of the first beat of data.

Intel FPGA streaming video full requires that the packets for each video field are preceded by an image information packet (type 0) and followed by an end-of-field packet (type 1). The IP adds these packets to the incoming stream. The image information packet is similar to the control packet in the Avalon streaming video input. It contains information about the width, height, and interlaced format of the following field, which the IP can source directly from the Avalon streaming video control packet. Further information about the input color space and chroma sampling and siting, which is not in the incoming control packet, comes either from the register map or from parameters, depending on how you configure the IP.
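
The following sketch shows how the fields of an image information packet can be assembled from a parsed Avalon streaming video control packet plus the values that come from the register map or parameters. The dictionary layout and the name build_image_info are illustrative assumptions, not the packet's bit-level format.

    # Assumed control packet contents: (width, height, interlaced), parsed from the type-15 packet.
    def build_image_info(control_packet, color_space, chroma_sampling):
        width, height, interlaced = control_packet
        return {
            "type": 0,                              # image information packet precedes each field
            "width": width,
            "height": height,
            "interlaced": interlaced,
            "color_space": color_space,             # not in the control packet: register map or parameter
            "chroma_sampling": chroma_sampling,     # not in the control packet: register map or parameter
        }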

You can select how the IP processes user packets (types 1-14) in the Avalon streaming video input via the How Avalon-ST Video user packets are handled parameter. You can select that these types of packets never exist, in which case the IP makes no provision to process them (if any are received, the IP may lock up). You can select that the IP discards these packets, in which case the IP adds some additional logic to detect and discard them, as is the case in conversion to Intel FPGA streaming video lite.

The final option, which is not available when converting to Intel FPGA streaming video lite but is available when converting to Intel FPGA streaming video full, is to propagate the incoming user packets. However, the IP cannot insert the user packet directly into the main video stream because Intel FPGA streaming video full restricts nonvideo packets to only 16 bits of the data signal and to a maximum length of 4 data beats.

To process larger nonvideo packets, Intel FPGA streaming video includes the option of a separate auxiliary interface, which follows the same protocol as the main video interface. The video packets on the video interface (those with tuser bit 1 set to 0) contain video lines. The data packets on the auxiliary interface may contain any nonvideo data you require. When you select Pass all user packets through to the output for Avalon streaming video user packet handling, the IP clips the header beat of the Avalon streaming video user packet (which contains the packet type) from the packet and routes the remainder of the data to the auxiliary output interface.

To allow the packets on the auxiliary interface to be synchronized with the main video interface, the IP includes a single-beat nonvideo packet in the main stream at the point where the user packet was present in the original Avalon streaming video input. Intel FPGA streaming video full sets aside the nonvideo packet types in the range 16-31 for free use by the system. The IP adds 16 to the original Avalon streaming video packet type (in the range 1-14) to move it into this range and uses the result as the type of the newly created packet. The new packet also sets a flag in bit 5 to indicate that a packet on the auxiliary interface is associated with this nonvideo packet and should be read at this point in the video stream.
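
The sketch below models this pass-through option at the packet level. The bit positions (packet type in the low bits of the first beat, the auxiliary flag in bit 5) follow the description above; the list-based packet representation and function name are illustrative assumptions.

    def propagate_user_packet(avalon_packet):
        """Split an Avalon streaming video user packet (types 1-14) into a single-beat
        in-band marker for the main stream and a payload packet for the auxiliary interface."""
        header_beat, *payload = avalon_packet
        avalon_type = header_beat & 0xF              # packet type from the Avalon header beat
        assert 1 <= avalon_type <= 14, "only user packet types 1-14 are propagated"
        marker_type = avalon_type + 16               # move the type into the system range 16-31
        marker_beat = marker_type | (1 << 5)         # bit 5: an aux-interface packet accompanies this
        return [marker_beat], payload                # (main-stream marker packet, auxiliary packet)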

Protocol Converter - Intel FPGA Streaming Video Lite to Avalon Streaming Video

The IP converts the protocol in three steps:

  1. Combines line packets into a single frame packet.

    Intel FPGA Streaming Video specifies that you transmit video data with one video line per packet. Avalon Streaming Video requires that you transmit all the pixels in a frame in a single packet. The incoming line packets must merge to form one frame packet. If you transmit a single pixel per clock cycle, bit 0 of the incoming tuser signal marks the first pixel of each frame. The IP concatenates packets until bit 0 of tuser is asserted. If the number of pixels per line is not a multiple of the pixels per clock, the extra pixels in the final clock cycle of data are effectively empty and you must ignore them. You must specify the width of the incoming video frame via the register map. The IP uses this width information to determine which (if any) pixels it should ignore at the end of each line when concatenating the packets.

  2. Adds the frame packet header and the control packet.

    Avalon Streaming Video requires that each frame packet begins with a one-cycle header specifying a packet type of 0. The IP adds this header to the frame packet created previously. Avalon Streaming Video also recommends that each frame packet is preceded by a control packet (of type 15) that specifies the width, height, and interlacing scheme of the following frame. You supply the width and height, as well as the initial value for the interlacing specifier, via the register map. The IP uses these values to construct the Avalon Streaming control packets that it adds to the stream. If you select an interlacing specifier for progressive video, the IP uses this value for all control packets. If the interlacing specifier identifies an interlaced scheme, the IP toggles the f0/f1 bit automatically in the outgoing control packets. A behavioral sketch of steps 1 and 2 follows this list.

  3. Converts the Avalon Streaming ready latency from 0 to 1.

    The IP replaces the AXI4-Stream tlast signal with the Avalon Streaming startofpacket and endofpacket signals. The IP creates the empty signal (if the number of pixels per clock cycle is greater than 1). The IP reformats the AXI4-Stream byte aligned tdata to non-byte aligned Avalon Streaming data. The interface is now compliant with the Avalon Streaming protocol, but with a ready latency of 0. Avalon Streaming Video requires that the ready latency is 1, so the IP converts the ready latency from 0 to 1.
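
The following sketch models steps 1 and 2 at the packet level, reusing the (sof, line) representation from the earlier sketch. The argument names and the control packet layout are illustrative assumptions, not the IP's actual register map or packet format.

    def lite_to_avalon(line_packets, width, height, interlaced, f_bit=0):
        """Collect line packets into frames and emit (control packet, frame packet) pairs."""
        frames, current = [], []
        for sof, line in line_packets:               # sof models tuser bit 0 on the first pixel
            if sof and current:
                frames.append(current)
                current = []
            current.append(line[:width])             # drop any padding pixels beyond the frame width
        if current:
            frames.append(current)                   # flush the final frame (see the latency note later)

        out = []
        for frame in frames:
            control = (15, [width, height, f_bit if interlaced else 0])   # type-15 control packet
            pixels = [p for line in frame for p in line]
            out.append((control, (0, pixels)))       # type-0 frame packet with a one-cycle header
            if interlaced:
                f_bit ^= 1                           # toggle f0/f1 automatically for interlaced schemes
        return out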

Intel FPGA streaming video full to Avalon streaming video

Converting Intel FPGA streaming video full to Avalon streaming video is similar to converting from Intel FPGA streaming video lite. Steps 1 and 3 are the same, but step 2 is different because Intel FPGA streaming video full supports nonvideo packets.

The IP extracts the information required to populate the Avalon streaming video control packet from the image information packet in the Intel FPGA streaming video stream, so the Avalon memory-mapped control agent interface is not required.

All Intel FPGA streaming video nonvideo packets with a type in the range 0-15 have specified or reserved functions and cannot be translated to Avalon streaming video. The IP removes these packets from the stream regardless of the value you select for the How Intel FPGA Streaming Video aux packets are handled parameter. If you select Discard all user packets received, the IP discards all user-defined auxiliary control packets received on the Intel FPGA streaming video input. With this setting, the optional auxiliary streaming input interface turns on, and the IP discards any packets received from this interface because it can carry the associated payloads for auxiliary control packets.

If you select the option to pass through auxiliary packets, the IP must still discard in-band packets with types 0-15 because these do not translate to Avalon streaming video. The IP must also discard types 16, 28, and 31 because they map to reserved types in Avalon streaming video. For the remaining packet types, the IP checks the flag in bit 5 of the first data beat, which indicates whether a packet on the auxiliary interface accompanies the in-band packet. If the flag is not set, the IP creates an output packet containing just a header beat with the packet type minus 16. If the flag is set, the IP propagates a packet from the auxiliary interface to the output, with a single-beat header appended to the front containing the in-band packet type minus 16. This process is the reverse of propagating user packets when converting from Avalon streaming video to Intel FPGA streaming video full.
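
The sketch below models this reverse mapping under the same packet-level assumptions as the earlier sketches. It assumes that the reserved in-band types (0-15, and 16, 28, and 31) have already been removed from the stream.

    def restore_user_packet(in_band_packet, aux_packets):
        """Rebuild an Avalon streaming video user packet from an in-band marker packet and,
        if bit 5 of the first beat is set, the next packet on the auxiliary interface."""
        first_beat = in_band_packet[0]               # only the first beat of the marker is read
        avalon_type = (first_beat & 0x1F) - 16       # map the type back into the range 1-14
        if first_beat & (1 << 5):                    # flag: an aux-interface packet accompanies it
            return [avalon_type] + aux_packets.pop(0)
        return [avalon_type]                         # header beat only, no payload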

The in-band packets that indicate a packet exists on the auxiliary interface may have between 1 and 4 beats of data. However, the IP reads only the data in the first beat and ignores all other data in the conversion process. The auxiliary interface conforms to the Intel FPGA streaming video protocol and can carry all packet types. The IP discards all image information, end-of-field, and auxiliary control packets received on this interface. The IP forwards all other packets, as these are the payload data packets for the auxiliary control packets received on the main Intel FPGA streaming video input.

Intel FPGA streaming video full allows auxiliary control packets to arrive in the stream both before the field data (the line packets) and between the final line packet and the end-of-field packet, that is, both before and after the field. Avalon streaming video only allows user packets before the field packet, so the IP considers any packets after the field to be part of the next field. If the IP encounters auxiliary control packets after the field in Intel FPGA streaming video full, it still converts them to Avalon streaming video, but considers them part of the following frame. Avoid using auxiliary control packets with types 17-31 after the field if this behavior is not acceptable.

Intel FPGA streaming video lite to Intel FPGA streaming video full

Converting from the lite variant to the full variant of Intel FPGA streaming video is a lightweight process because both variants use the same data format and packetization to transmit lines of video. The IP adds an image information packet before the first line of each field and an end-of-field packet after the final line. The IP takes the information required to populate the image information packet from the register map. The end-of-field packet requires the broken field flag and the field count. The IP sets the broken field flag if the number of lines received for the given field does not match the field height set in the register map. The field count comes from a counter that increments at the end of each field and that you can reset to zero via the register map at any time.
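
A minimal sketch of the end-of-field packet fields described above, assuming a register named field_height and a free-running field counter; both names are illustrative, not the IP's actual register map.

    def end_of_field_packet(lines_received, field_height, field_count):
        """Build the end-of-field (type 1) packet appended after the final line of a field."""
        broken = (lines_received != field_height)    # flag fields with an unexpected line count
        return {"type": 1, "broken_field": broken, "field_count": field_count}

    # The field counter increments at the end of each field and can be reset to zero
    # at any time via the register map.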

Intel FPGA streaming video full to Intel FPGA streaming video lite

Converting from the full variant to the lite variant of Intel FPGA streaming video is straightforward because both variants use the same data format and packetization to transmit lines of video. To convert, the IP removes any image information, end-of-field, or auxiliary control packets (marked by asserting bit 1 of tuser for the first beat of the packet) from the input stream.

Pixel data format

The pixel data format for Avalon Streaming Video and Intel FPGA Streaming Video is almost identical. Intel FPGA Streaming Video requires that the width of each pixel is rounded up to the next whole number of bytes. The extra bits can be filled with zeros, ones, or any other data. Avalon Streaming Video has no such requirement and uses only the required bits for each pixel. The Protocol Converter IP adds the required extra bits when converting from Avalon Streaming Video to Intel FPGA Streaming Video and removes them when converting from Intel FPGA Streaming Video to Avalon Streaming Video.
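
The byte-alignment rule can be summarized with a small helper. This is a sketch that assumes the pixel width is simply the bits per sample multiplied by the number of color planes per pixel; the example values are only an illustration.

    def padded_pixel_bits(bits_per_sample, planes_per_pixel):
        """Round the pixel width up to the next whole number of bytes, as Intel FPGA
        Streaming Video requires; the padding bits carry no meaning."""
        pixel_bits = bits_per_sample * planes_per_pixel
        return ((pixel_bits + 7) // 8) * 8

    # Example: a pixel with two 10-bit color planes (20 bits) is padded to 24 bits (3 bytes).
    assert padded_pixel_bits(10, 2) == 24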

Avalon Streaming Video and Intel FPGA Streaming Video both specify how the color planes in each pixel should be arranged for RGB and YCbCr formatted data. For YCbCr data, the protocols specify the color plane ordering for 4:4:4, 4:2:2 and 4:2:0 chroma sampling. The color plane ordering is almost identical between the two protocols, apart from swapping Y and Cr planes in the case of YCbCr 4:4:4. The Protocol Converter IP can implement the swap, but you must specify the color space and chroma sampling for each frame. You can specify either via the parameters or the register map accessed through an Avalon memory-mapped agent interface.

You can turn on or turn off the Avalon memory-mapped agent interface via a parameter. If the Avalon memory-mapped agent interface is turned on, specify the color space and chroma sampling at run time via the register map. If the Avalon memory-mapped agent interface is not turned on, specify the color space and chroma sampling in the Video color space and Video chroma sampling parameters respectively.

If the Protocol Converter IP converts from Intel FPGA Streaming Video to Avalon Streaming Video, turn on the Avalon memory-mapped agent interface and do not use the parameters. If the Protocol Converter IP converts from Avalon Streaming Video to Intel FPGA Streaming Video, the Avalon memory-mapped agent interface is optional. If you know the color space and chroma sampling are fixed for the system, you can turn off the agent interface and specify the color space and chroma sampling via the parameters. If the color space and chroma sampling may vary at run time, turn on the agent interface and specify the values in the register map.

For conversions in both directions, the YCbCr 444 color swap parameter gates the color plane swap for YCbCr 4:4:4 formatted data. You must turn on this option for the IP to apply the color plane swap.
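
A sketch of the gated swap, assuming a (Y, Cb, Cr) tuple ordering purely for illustration; the actual per-protocol plane orderings are defined by the protocol specifications, not by this example.

    def swap_y_cr(pixel):
        """Swap the Y and Cr planes of a YCbCr 4:4:4 pixel, leaving Cb untouched."""
        y, cb, cr = pixel                            # assumed incoming ordering for illustration
        return (cr, cb, y)

    def maybe_swap(pixel, ycbcr_444_color_swap, is_ycbcr_444):
        # The swap applies only when the YCbCr 444 color swap parameter is on and the
        # stream is YCbCr 4:4:4 formatted.
        return swap_y_cr(pixel) if (ycbcr_444_color_swap and is_ycbcr_444) else pixel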

4:2:0 chroma sampled video

Avalon streaming video requires you to declare a fixed number of color planes per pixel. The packing of 4:2:0 color planes does not fully align with this definition of a pixel. Four luma samples exist for every pair of Cb and Cr samples, so the effective number of color planes per pixel is 1.5 (1 luma sample and half a chroma sample). You cannot configure Avalon streaming video to have 1.5 color planes per pixel, so the IP packs two luma samples with each chroma sample to make an atom of transport on the Avalon streaming video interface. The Avalon streaming video protocol considers this group of three color planes to be a pixel, even though it contains two luma samples and is effectively two pixels. The width value reported in the control packet counts the number of three-color-plane groups that the interface treats as pixels, so the reported width is always half the actual field width for 4:2:0 sampled data.

Both variants of the Intel FPGA streaming video protocol define 4:2:0 chroma sampling similarly, packing two luma samples into the atom of transport that is treated as a pixel in 4:4:4 or 4:2:2 sampling. However, the field width specified in the image information packets (when using the full variant of Intel FPGA streaming video) and in the register map is always the true field width.

The Protocol Converter IP requires you to specify the chroma sampling and automatically doubles or halves the reported widths where required.
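
A minimal sketch of the width adjustment, assuming the chroma sampling is passed as a string; the function names are illustrative.

    def avalon_to_true_width(reported_width, chroma_sampling):
        """Avalon streaming video control packets report half the field width for 4:2:0 data."""
        return reported_width * 2 if chroma_sampling == "4:2:0" else reported_width

    def true_to_avalon_width(true_width, chroma_sampling):
        """Reverse conversion: report half the true width for 4:2:0 data."""
        return true_width // 2 if chroma_sampling == "4:2:0" else true_width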

End of field detection in Intel FPGA streaming video lite

For Avalon Streaming Video, the endofpacket signal, asserted on the final pixel of the video data packet, marks the end of each video frame.

In Avalon Streaming Video, you cannot transmit the final pixel of each frame until you are certain that it is the final pixel; otherwise, you risk driving the endofpacket signal incorrectly.

For Intel FPGA Streaming Video lite, the protocol explicitly indicates only the start of each frame, so the IP infers the end of each frame by receiving the start-of-frame marker for the next frame. This can cause latency issues when converting from Intel FPGA Streaming Video to Avalon Streaming Video: the IP cannot transmit the final pixel of each frame at the output until it receives the first pixel of the next frame at the input. Similarly, when converting to Intel FPGA streaming video full, the IP cannot send the end-of-field (EOF) packet for a given frame until it sees the first pixel of the next frame.

If the video application has no significant blanking (delay) between the last pixel of one frame and the first pixel of the next frame, the IP introduces little or no delay in sending out the final pixel of each frame (Avalon streaming video) or the end-of-field packet (Intel FPGA streaming video full). If the application does have significant blanking, the delay to transmit the final pixel or end-of-field packet may be too long. The Protocol Converter IP includes an option to remove this delay.

If you turn on Low latency mode, the Protocol Converter IP transmits the Avalon Streaming Video frame endofpacket or Intel FPGA streaming video full EOF packet according to the number of lines it expects in each frame, as you specify in the register map. The Intel FPGA Streaming Video protocol transmits each line of video data as a packet, so the IP terminates the output frame at the end of the input packet for the specified number of lines. If the IP receives any additional lines, the IP discards them and does not transmit them at the output.
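
The following sketch models Low latency mode with the same (sof, line) packet representation as the earlier sketches; lines_per_frame stands in for the line count you program in the register map.

    def low_latency_frames(line_packets, lines_per_frame):
        """Terminate each output frame after the expected line count; discard any extra lines."""
        frame, count = [], 0
        for sof, line in line_packets:               # sof models tuser bit 0 on the first pixel
            if sof:
                frame, count = [], 0                 # a new input frame restarts the count
            if count < lines_per_frame:
                frame.append(line)
                count += 1
                if count == lines_per_frame:
                    yield frame                      # end the output frame without waiting for the next SOF
            # lines received after the expected count are discarded, not transmitted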