Open-source RTP library for high-speed 4K HEVC video streaming

uvgRTP

uvgRTP is a Real-Time Transport Protocol (RTP) library written in C++ with a focus on simple-to-use, high-efficiency media delivery over the Internet. It features an intuitive and easy-to-use Application Programming Interface (API) and built-in support for transporting Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), and Advanced Video Coding (AVC) encoded video as well as Opus encoded audio. uvgRTP also supports End-to-End Encrypted (E2EE) media delivery using the combination of Secure RTP (SRTP) and ZRTP. uvgRTP has been designed to minimize memory operations to reduce its CPU usage and latency.

uvgRTP is licensed under the permissive BSD 2-Clause License. This cross-platform library runs on both Linux and Windows operating systems. macOS is also supported, but that support relies on community contributions. For SRTP/ZRTP support, uvgRTP uses the Crypto++ library.

Notable features

  • AVC/HEVC/VVC video streaming, including packetization
  • Out-of-the-box support for many formats that don’t need packetization, including Opus
  • Delivery encryption with SRTP
  • Encryption key negotiation with ZRTP
  • UDP firewall hole punching
  • Simple to use API
  • Working examples
  • Permissive license

Building and linking

See BUILDING.md for instructions on how to build and use uvgRTP.

Learning to use uvgRTP

The examples folder contains working examples and a basic tutorial on uvgRTP usage. For a more detailed description, check the documentation where you can find info on advanced topics and descriptions of all uvgRTP flags.

Furthermore, we provide Doxygen documentation of the public API on GitHub.

Contributing

We warmly welcome any contributions to the project. If you are considering submitting a pull request, please read CONTRIBUTING.md before proceeding.

Papers

If you use uvgRTP in your research, please cite one of the following papers:

Open-Source RTP Library for High-Speed 4K HEVC Video Streaming

A. Altonen, J. Räsänen, J. Laitinen, M. Viitanen, and J. Vanne, “Open-source RTP library for high-speed 4K HEVC video streaming,” in Proc. IEEE Int. Workshop on Multimedia Signal Processing, Tampere, Finland, Sept. 2020.

uvgRTP 2.0: Open-Source RTP Library for Real-Time VVC/HEVC Streaming

A. Altonen, J. Räsänen, A. Mercat, and J. Vanne, “uvgRTP 2.0: open-source RTP library for real-time VVC/HEVC streaming,” in Proc. IEEE Int. Conf. Multimedia Expo, Shenzhen, China, July 2021.

Open-source RTP Library for End-to-End Encrypted Real-Time Video Streaming Applications

J. Räsänen, A. Altonen, A. Mercat, and J. Vanne, “Open-source RTP library for end-to-end encrypted real-time video streaming applications,” in Proc. IEEE Int. Symp. Multimedia, Naples, Italy, Nov.-Dec. 2021.

Test framework

We also provide an easy-to-use performance test framework for benchmarking uvgRTP against FFmpeg and Live555 on Linux. The framework can be found here.

Origin

The original version of uvgRTP is based on Marko Viitanen’s fRTPlib library.

There are not currently any lossless—or even near-lossless—video codecs generally available in web browsers. The reason for this is simple: video is huge. Lossless compression is by definition less effective than lossy compression. For example, uncompressed 1080p video (1920 by 1080 pixels) with 4:2:0 chroma subsampling needs at least 1.5 Gbps. Using lossless compression such as FFV1 (which is not supported by web browsers) could perhaps reduce that to somewhere around 600 Mbps, depending on the content. That’s still a huge number of bits to pump through a connection every second, and is not currently practical for any real-world use.
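The arithmetic behind that 1.5 Gbps figure is easy to reproduce; a minimal sketch, assuming 8-bit samples and a 60 fps frame rate (the frame rate is our assumption, not stated above):

```python
# Uncompressed bit rate for 1080p video with 4:2:0 chroma subsampling.
# 4:2:0 stores one Cb and one Cr sample per 2×2 luma block: 4 Y + 1 Cb + 1 Cr
# samples per 4 pixels, i.e. 1.5 samples per pixel at 8 bits each.
width, height, fps = 1920, 1080, 60          # assumed frame rate
bits_per_pixel = 8 * 1.5                     # 12 bits/pixel for 8-bit 4:2:0
bits_per_frame = width * height * bits_per_pixel
bits_per_second = bits_per_frame * fps

print(f"{bits_per_second / 1e9:.2f} Gbps")   # ≈ 1.49 Gbps
```

At 30 fps the same arithmetic gives roughly 750 Mbps, which is why the frame rate matters when quoting such figures.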

This is the case even though some of the lossy codecs have a lossless mode available; the lossless modes are not implemented in any current web browsers. The best you can do is to select a high-quality codec that uses lossy compression and configure it to perform as little compression as possible. One way to do this is to configure the codec to use “fast” compression, which inherently means less compression is achieved.

Preparing video externally

To prepare video for archival purposes from outside your web site or app, use a utility that performs compression on the original uncompressed video data. For example, the free x264 utility can be used to encode video in AVC format using a very high bit rate:

```
x264 --crf 18 --preset ultrafast --output outfilename.mp4 infile
```

While other codecs may have better best-case quality levels when compressing the video by a significant margin, their encoders tend to be slow enough that the nearly-lossless encoding you get with this compression is vastly faster at about the same overall quality level.

Recording video

Given the constraints on how close to lossless you can get, you might consider using AVC or AV1. For example, if you’re using the MediaStream Recording API to record video, you might use code like the following when creating your MediaRecorder object:

```js
const kbps = 1024;
const Mbps = kbps * kbps;

const options = {
  mimeType: 'video/webm; codecs="av01.2.19H.12.0.000.09.16.09.1, flac"',
  bitsPerSecond: 800 * Mbps,
};

let recorder = new MediaRecorder(sourceStream, options);
```

This example creates a MediaRecorder configured to record AV1 video using BT.2100 HDR in 12-bit color with 4:4:4 chroma subsampling and FLAC for lossless audio. The resulting file will use a bit rate of no more than 800 Mbps shared between the video and audio tracks. You will likely need to adjust these values depending on hardware performance, your requirements, and the specific codecs you choose to use. This bit rate is obviously not realistic for network transmission and would likely only be used locally.

Breaking down the value of the codecs parameter into its dot-delineated properties, we see the following:

| Value | Description |
| --- | --- |
| `av01` | The four-character code (4CC) designation identifying the AV1 codec. |
| `2` | The profile. A value of 2 indicates the Professional profile. A value of 1 is the High profile, while a value of 0 would specify the Main profile. |
| `19H` | The level and tier. This value comes from the table in section A.3 of the AV1 specification, and indicates the high tier of Level 6.3. |
| `12` | The color depth. This indicates 12 bits per component. Other possible values are 8 and 10, but 12 is the highest-accuracy color representation available in AV1. |
| `0` | The monochrome mode flag. If 1, no chroma planes would be recorded and all data would be strictly luma data, resulting in a greyscale image. We’ve specified 0 because we want color. |
| `000` | The chroma subsampling mode, taken from section 6.4.2 in the AV1 specification. A value of 000, combined with the monochrome mode value 0, indicates that we want 4:4:4 chroma subsampling, or no loss of color data. |
| `09` | The color primaries to use. This value comes from section 6.4.2 in the AV1 specification; 9 indicates that we want to use BT.2020 color, which is used for HDR. |
| `16` | The transfer characteristics to use. This also comes from section 6.4.2; 16 indicates that we want to use the characteristics for BT.2100 PQ color. |
| `09` | The matrix coefficients to use, again from section 6.4.2. A value of 9 specifies that we want to use BT.2020 with variable luminance; this is also known as BT.2020 YCbCr. |
| `1` | The video "full range" flag. A value of 1 indicates that we want the full color range to be used. |
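The dot-delimited fields can also be pulled apart programmatically. A small sketch using a hypothetical helper (`parse_av01_codecs` and the field names are ours, not from the AV1 specification):

```python
def parse_av01_codecs(codecs: str) -> dict:
    """Split an AV1 codecs string into its dot-delimited fields.

    Field names are illustrative labels for the positions described
    in the table above, in order.
    """
    fields = codecs.split(".")
    names = ["fourcc", "profile", "level_tier", "bit_depth",
             "monochrome", "chroma_subsampling", "color_primaries",
             "transfer_characteristics", "matrix_coefficients",
             "full_range"]
    return dict(zip(names, fields))

info = parse_av01_codecs("av01.2.19H.12.0.000.09.16.09.1")
print(info["profile"])     # "2" → Professional profile
print(info["bit_depth"])   # "12"
```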

The documentation for your codec choices will probably offer information you’ll use when constructing your codecs parameter.

Open and royalty-free video coding format developed by the Alliance for Open Media

AOMedia Video 1 (AV1) is an open, royalty-free video coding format initially designed for video transmissions over the Internet. It was developed as a successor to VP9 by the Alliance for Open Media (AOMedia),[2] a consortium founded in 2015 that includes semiconductor firms, video on demand providers, video content producers, software development companies and web browser vendors. The AV1 bitstream specification includes a reference video codec.[1] In 2018, Facebook conducted testing that approximated real-world conditions, and the AV1 reference encoder achieved 34%, 46.2% and 50.3% higher data compression than libvpx-vp9, x264 High profile, and x264 Main profile respectively.[3]

Like VP9, but unlike H.264/AVC and HEVC, AV1 has a royalty-free licensing model that does not hinder adoption in open-source projects.[4][5][6][7][2][8]

AVIF is an image file format that uses AV1 compression algorithms.

History


AV1 logo prior to 2018

The Alliance’s motivations for creating AV1 included the high cost and uncertainty involved with the patent licensing of HEVC, the MPEG-designed codec expected to succeed AVC.[9][7] Additionally, the Alliance’s seven founding members – Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix – announced that the initial focus of the video format would be delivery of high-quality web video.[10] The official announcement of AV1 came with the press release on the formation of the Alliance for Open Media on 1 September 2015. Only 42 days before, on 21 July 2015, HEVC Advance’s initial licensing offer was announced to be an increase over the royalty fees of its predecessor, AVC.[11] In addition to the increased cost, the complexity of the licensing process increased with HEVC. Unlike previous MPEG standards where the technology in the standard could be licensed from a single entity, MPEG-LA, when the HEVC standard was finished, two patent pools had been formed with a third pool on the horizon. In addition, various patent holders were refusing to license patents via either pool, increasing uncertainty about HEVC’s licensing. According to Microsoft’s Ian LeGrow, an open-source, royalty-free technology was seen as the easiest way to eliminate this uncertainty around licensing.[9]

The negative effect of patent licensing on free and open-source software has also been cited as a reason for the creation of AV1.[7] For example, building an H.264 implementation into Firefox would prevent it from being distributed free of charge since licensing fees would have to be paid to MPEG-LA.[12] Free Software Foundation Europe has argued that FRAND patent licensing practices make the free software implementation of standards impossible due to various incompatibilities with free software licenses.[8]

Many of the components of the AV1 project were sourced from previous research efforts by Alliance members. Individual contributors had started experimental technology platforms years before: Xiph’s/Mozilla’s Daala published code in 2010, Google’s experimental VP9 evolution project VP10 was announced on 12 September 2014,[13] and Cisco’s Thor was published on 11 August 2015. Building on the code base of VP9, AV1 incorporates additional techniques, several of which were developed in these experimental formats.[14]

Many companies are part of the Alliance for Open Media, including Samsung, Vimeo, Microsoft, Netflix, Mozilla, AMD, Nvidia, Intel, ARM, Google, Facebook, Cisco, Amazon, Hulu, VideoLAN, Adobe and Apple. Apple is one of the main members of AOMedia, although it joined after the Alliance's formation. AV1 streams have also been officially added to the video types supported by Apple's CoreMedia framework.[15]

The first version 0.1.0 of the AV1 reference codec was published on 7 April 2016. Although a soft feature freeze came into effect at the end of October 2017, development continued on several significant features. One of these, the bitstream format, was projected to be frozen in January 2018 but was delayed due to unresolved critical bugs as well as further changes to transformations, syntax, the prediction of motion vectors, and the completion of legal analysis.[citation needed] The Alliance announced the release of the AV1 bitstream specification on 28 March 2018, along with a reference, software-based encoder and decoder.[16] On 25 June 2018, a validated version 1.0.0 of the specification was released.[17] On 8 January 2019 a validated version 1.0.0 with Errata 1 of the specification was released.

Martin Smole from AOM member Bitmovin said that the computational efficiency of the reference encoder was the greatest remaining challenge after the bitstream format freeze had been completed.[18] While working on the format, the encoder was not targeted for production use and speed optimizations were not prioritized. Consequently, the early version of AV1 was orders of magnitude slower than existing HEVC encoders. Much of the development effort was consequently shifted towards maturing the reference encoder. In March 2019, it was reported that the speed of the reference encoder had improved greatly and was within the same order of magnitude as encoders for other common formats.[19]

On 21 January 2021, the MIME type of AV1 was defined as video/AV1. The usage of AV1 using this MIME type is restricted to Real-time Transport Protocol purposes only.[20]

Purpose


AV1 aims to be a video format for the web that is both state of the art and royalty free.[2] According to Matt Frost, head of strategy and partnerships in Google’s Chrome Media team, “The mission of the Alliance for Open Media remains the same as the WebM project.”[21]

A recurring concern in standards development, not least of royalty-free multimedia formats, is the danger of accidentally infringing on patents that their creators and users did not know about. This concern has been raised regarding AV1,[22] and previously VP8,[23] VP9,[24] Theora[25] and IVC.[26] The problem is not unique to royalty-free formats, but it uniquely threatens their status as royalty-free.

Patent licensing

| | AV1, VP9, Theora | HEVC, AVC | GIF, MP3, MPEG-1, MPEG-2, MPEG-4 Part 2 |
| --- | --- | --- | --- |
| By known patent holders | Royalty-free | Royalty-bearing | Patents expired |
| By unknown patent holders | Impossible to ascertain until the format is old enough that any patents would have expired (at least 20 years in WTO countries) | | |

To fulfill the goal of being royalty free, the development process requires that no feature can be adopted before it has been confirmed independently by two separate parties to not infringe on patents of competing companies. In cases where an alternative to a patent-protected technique is not available, owners of relevant patents have been invited to join the Alliance (even if they were already members of another patent pool). For example, Alliance members Apple, Cisco, Google, and Microsoft are also licensors in MPEG-LA’s patent pool for H.264.[22] As an additional protection for the royalty-free status of AV1, the Alliance has a legal defense fund to aid smaller Alliance members or AV1 licensees in the event they are sued for alleged patent infringement.[22][6][27]

Under patent rules adopted from the World Wide Web Consortium (W3C), technology contributors license their AV1-connected patents to anyone, anywhere, anytime based on reciprocity (i.e. as long as the user does not engage in patent litigation).[28] As a defensive condition, anyone engaging in patent litigation loses the right to the patents of all patent holders.[citation needed][29]

This treatment of intellectual property rights (IPR), and its absolute priority during development, is contrary to extant MPEG formats like AVC and HEVC. These were developed under an IPR uninvolvement policy by their standardization organisations, as stipulated in the ITU-T’s definition of an open standard. However, MPEG’s chairman has argued that this practice has to change,[30] and it is changing:[citation needed] EVC is also set to have a royalty-free subset,[31][32] and will have switchable features in its bitstream to defend against future IPR threats.[citation needed]

The creation of royalty-free web standards has been a long-stated pursuit for the industry. In 2007, the proposal for HTML5 video specified Theora as mandatory to implement. The reason was that public content should be encoded in freely implementable formats, if only as a “baseline format”, and that changing such a baseline format later would be hard because of network effects.[33]

The Alliance for Open Media is a continuation of Google’s efforts with the WebM project, which renewed the royalty-free competition after Theora had been surpassed by AVC. For companies such as Mozilla that distribute free software, AVC can be difficult to support as a per-copy royalty is unsustainable given the lack of revenue stream to support these payments in free software (see FRAND § Excluding costless distribution).[4] Similarly, HEVC has not successfully convinced all licensors to allow an exception for freely distributed software (see HEVC § Provision for costless software).

The performance goals include “a step up from VP9 and HEVC” in efficiency for a low increase in complexity. NETVC’s efficiency goal is 25% improvement over HEVC.[34] The primary complexity concern is for software decoding, since hardware support will take time to reach users. However, for WebRTC, live encoding performance is also relevant, which is Cisco’s agenda: Cisco is a manufacturer of videoconferencing equipment, and their Thor contributions aim at “reasonable compression at only moderate complexity”.[35]

Feature-wise, AV1 is specifically designed for real-time applications (especially WebRTC) and higher resolutions (wider color gamuts, higher frame rates, UHD) than typical usage scenarios of the current generation (H.264) of video formats, where it is expected to achieve its biggest efficiency gains. It is therefore planned to support the color space from ITU-R Recommendation BT.2020 and up to 12 bits of precision per color component.[36] AV1 is primarily intended for lossy encoding, although lossless compression is supported as well.[37]

Technology


AV1 is a traditional block-based frequency transform format featuring new techniques. Based on Google’s VP9,[38] AV1 incorporates additional techniques that mainly give encoders more coding options to enable better adaptation to different types of input.

Processing stages of an AV1 encoder with relevant technologies associated with each stage.

The Alliance published a reference implementation written in C and assembly language (aomenc, aomdec) as free software under the terms of the BSD 2-Clause License.[40] Development happens in public and is open for contributions, regardless of AOM membership.

The development process was such that coding tools were added to the reference code base as experiments, controlled by flags that enable or disable them at build time, for review by other group members as well as specialized teams that helped with and ensured hardware friendliness and compliance with intellectual property rights (TAPAS). When the feature gained some support in the community, the experiment was enabled by default, and ultimately had its flag removed when all of the reviews were passed.[41] Experiment names were lowercased in the configure script and uppercased in conditional compilation flags.[citation needed]

To better and more reliably support HDR and color spaces, corresponding metadata can now be integrated into the video bitstream instead of being signaled in the container.

Partitioning


10 ways for subpartitioning coding units – into squares (recursively), rectangles, or mixtures thereof (“T-shaped”).

Frame content is separated into adjacent same-sized blocks referred to as superblocks. Similar to the concept of a macroblock, superblocks are square-shaped and can either be of size 128×128 or 64×64 pixels. Superblocks can be divided in smaller blocks according to different partitioning patterns. The four-way split pattern is the only pattern whose partitions can be recursively subdivided. This allows superblocks to be divided into partitions as small as 4×4 pixels.

Diagram of the AV1 superblock partitioning. It shows how 128×128 superblocks can be split all the way down to 4×4 blocks. As special cases, 128×128 and 8×8 blocks can’t use 1:4 and 4:1 splits, and 8×8 blocks can’t use “T”-shaped splits.

“T-shaped” partitioning patterns, a feature developed for VP10, are introduced, as well as horizontal or vertical splits into four stripes of 4:1 or 1:4 aspect ratio. The available partitioning patterns vary according to block size: both 128×128 and 8×8 blocks can’t use 4:1 and 1:4 splits, and 8×8 blocks also can’t use “T”-shaped splits.
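The recursive four-way split can be modeled in a few lines; a toy sketch covering only the square splits (the rectangular and “T”-shaped patterns are omitted):

```python
def four_way_splits(size: int, min_size: int = 4):
    """Yield the square block sizes reachable from a superblock by
    recursively applying the four-way (quad) split, which is the only
    partitioning pattern AV1 allows to be subdivided recursively."""
    yield size
    if size > min_size:
        # Each four-way split produces four blocks of half the
        # width and height of the parent block.
        yield from four_way_splits(size // 2, min_size)

print(list(four_way_splits(128)))   # [128, 64, 32, 16, 8, 4]
```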

Two separate predictions can now be used on spatially different parts of a block using a smooth, oblique transition line (wedge-partitioned prediction).[citation needed] This enables more accurate separation of objects without the traditional staircase lines along the boundaries of square blocks.

More encoder parallelism is possible thanks to configurable prediction dependency between tile rows (ext_tile).[42]

Prediction


AV1 performs internal processing in higher precision (10 or 12 bits per sample), which leads to quality improvement by reducing rounding errors.

Predictions can be combined in more advanced ways (than a uniform average) in a block (compound prediction), including smooth and sharp transition gradients in different directions (wedge-partitioned prediction) as well as implicit masks that are based on the difference between the two predictors. This allows the combination of either two inter predictions or an inter and an intra prediction to be used in the same block.[43][citation needed]

A frame can reference 6 instead of 3 of the 8 available frame buffers for temporal (inter) prediction while providing more flexibility on bi-prediction[44] (ext_refs[citation needed]).

Warped motion as seen from the front of a train.

The Warped Motion (warped_motion)[42] and Global Motion (global_motion[citation needed]) tools in AV1 aim to reduce redundant information in motion vectors by recognizing patterns arising from camera motion.[42] They implement ideas that were attempted in preceding formats such as MPEG-4 ASP, albeit with a novel approach that works in three dimensions. There can be a set of warping parameters for a whole frame offered in the bitstream, or blocks can use a set of implicit local parameters that get computed based on surrounding blocks.

Switch frames (S-frame) are a new inter-frame type that can be predicted using already-decoded reference frames from a higher-resolution version of the same video to allow switching to a lower resolution without the need for a full keyframe at the beginning of a video segment in the adaptive bitrate streaming use case.[45]

Intra prediction


Intra prediction consists of predicting the pixels of given blocks using only information available in the current frame. Most often, intra predictions are built from the neighboring pixels above and to the left of the predicted block. The DC predictor builds a prediction by averaging the pixels above and to the left of the block.

Directional predictors extrapolate these neighboring pixels according to a specified angle. In AV1, 8 main directional modes can be chosen. These modes start at an angle of 45 degrees and increase by a step size of 22.5 degrees up until 203 degrees. Furthermore, for each directional mode, six offsets of 3 degrees can be signaled for bigger blocks, three above the main angle and three below it, resulting in a total of 56 angles (ext_intra).
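The angle arithmetic above can be checked with a short sketch (a simplified model; AV1 signals the offsets as integer deltas of a 3° step, but the resulting angle set is the same):

```python
# Enumerate AV1's directional intra prediction angles: 8 main modes
# (45° to 202.5° in 22.5° steps), each with six ±3°-step offsets for
# larger blocks, giving 56 angles in total.
main_angles = [45 + 22.5 * i for i in range(8)]       # 45 … 202.5
offsets = [d * 3 for d in (-3, -2, -1, 0, 1, 2, 3)]   # -9 … +9 degrees
all_angles = sorted(a + o for a in main_angles for o in offsets)

print(len(main_angles), len(all_angles))  # 8 56
```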

The “TrueMotion” predictor was replaced with a Paeth predictor, which looks at the difference from the known pixel in the above-left corner to the pixel directly above and directly left of the new one, and then chooses the one that lies in the direction of the smaller gradient as predictor. A palette predictor is available for blocks with up to 8 dominant colors, such as some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl).[42] In order to reduce visible boundaries along borders of inter-predicted blocks, a technique called overlapped block motion compensation (OBMC) can be used. This involves extending a block’s size so that it overlaps with neighboring blocks by 2 to 32 pixels, and blending the overlapping parts together.[46]
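The Paeth predictor described above can be sketched in a few lines (`paeth_predict` is our illustrative name):

```python
def paeth_predict(left: int, above: int, above_left: int) -> int:
    """Paeth prediction: estimate the new pixel as left + above - above_left
    (the value it would have if the local gradient were perfectly linear),
    then return the neighbour whose value is closest to that estimate."""
    estimate = left + above - above_left
    # min() is stable, so ties are broken in favour of left, then above.
    return min((left, above, above_left),
               key=lambda v: abs(estimate - v))

print(paeth_predict(100, 110, 95))   # 110: the above pixel is closest
```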

Data transformation


To transform the error remaining after prediction to the frequency domain, AV1 encoders can use square, 2:1/1:2, and 4:1/1:4 rectangular DCTs (rect_tx),[44] as well as an asymmetric DST[47][48][49] for blocks where the top and/or left edge is expected to have lower error thanks to prediction from nearby pixels, or choose to do no transform (identity transform).

It can combine two one-dimensional transforms in order to use different transforms for the horizontal and the vertical dimension (ext_tx).[42][44]

Quantization


AV1 has new optimized quantization matrices (aom_qm).[50] The eight sets of quantization parameters that can be selected and signaled for each frame now have individual parameters for the two chroma planes and can use spatial prediction. On every new superblock, the quantization parameters can be adjusted by signaling an offset.

Filters


In-loop filtering combines Thor’s constrained low-pass filter and Daala’s directional deringing filter into the Constrained Directional Enhancement Filter, cdef. This is an edge-directed conditional replacement filter that smooths blocks roughly along the direction of the dominant edge to eliminate ringing artifacts.[citation needed]

There is also the loop restoration filter (loop_restoration) based on the Wiener filter and self-guided restoration filters to remove blur artifacts due to block processing.[42]

Film grain synthesis (film_grain) improves coding of noisy signals using a parametric video coding approach. Due to the randomness inherent to film grain noise, this signal component is traditionally either very expensive to code or prone to get damaged or lost, possibly leaving serious coding artifacts as residue. This tool circumvents these problems using analysis and synthesis, replacing parts of the signal with a visually similar synthetic texture based solely on subjective visual impression instead of objective similarity. It removes the grain component from the signal, analyzes its non-random characteristics, and instead transmits only descriptive parameters to the decoder, which adds back a synthetic, pseudorandom noise signal that’s shaped after the original component. It is the visual equivalent of the Perceptual Noise Substitution technique used in the AC-3, AAC, Vorbis, and Opus audio codecs.

Entropy coding


Daala’s entropy coder (daala_ec[citation needed]), a non-binary arithmetic coder, was selected to replace VP9’s binary entropy coder. The use of non-binary arithmetic coding helps evade patents, but also adds bit-level parallelism to an otherwise serial process, reducing clock rate demands on hardware implementations.[citation needed] In other words, the effectiveness of modern binary arithmetic coding such as CABAC is approached while using a larger-than-binary alphabet for greater speed, much as in Huffman coding (though not as simple and fast as Huffman coding).

AV1 also gained the ability to adapt the symbol probabilities in the arithmetic coder per coded symbol instead of per frame (ec_adapt).[42]
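Per-symbol probability adaptation can be illustrated with an exponentially decaying CDF update of the kind used in such coders; this is a toy model, not AV1’s exact update rule:

```python
def adapt_cdf(cdf, symbol, rate=5):
    """Move each cumulative-probability entry a step toward the CDF of a
    distribution that puts all mass on `symbol`. `rate` controls the
    adaptation speed; values use 15-bit fixed point."""
    max_prob = 1 << 15                      # fixed-point "1.0"
    for i in range(len(cdf)):
        target = max_prob if i >= symbol else 0
        cdf[i] += (target - cdf[i]) >> rate
    return cdf

# Start from a uniform 4-symbol CDF; repeatedly coding symbol 0
# raises its estimated probability after every symbol, not per frame.
cdf = [8192, 16384, 24576, 32768]
for _ in range(10):
    adapt_cdf(cdf, 0)
print(cdf[0] > 8192)   # True: symbol 0 is now more probable
```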

Scalable video coding



Of main importance to video conferencing, scalable video coding is a general technique, not unique to AV1, of restricting and structuring video frame dependencies so that one or more lower bitrate video streams are extractable from a higher bitrate stream with better quality. This differs from adaptive bitrate streaming in that some compression efficiency in each higher bitrate adaptation is given up for the benefit of the overall stream. The encoding process is also less redundant and demanding.

AV1 has provisions for temporal and spatial scalability.[51] This is to say that both framerate and resolution are usable ways to define a lower bitrate substream.
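Temporal scalability can be sketched as a dyadic layering scheme, where dropping the top layer halves the frame rate (a generic model of layered coding, not AV1-specific syntax):

```python
def temporal_layer(frame_index: int, num_layers: int = 2) -> int:
    """Assign frames to temporal layers in a dyadic pattern: layer 0
    holds every 2^(num_layers-1)-th frame, higher layers fill in
    between, so a decoder can drop the top layers to reduce frame rate."""
    for layer in range(num_layers):
        if frame_index % (1 << (num_layers - 1 - layer)) == 0:
            return layer
    return num_layers - 1

# With two layers, keeping only layer 0 halves the frame rate.
base = [f for f in range(8) if temporal_layer(f) == 0]
print(base)   # [0, 2, 4, 6]
```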

Quality and efficiency


A first comparison from the beginning of June 2016[52] found AV1 roughly on par with HEVC, as did one using code from late January 2017.[53]

In April 2017, using the 8 enabled experimental features at the time (of 77 total), Bitmovin was able to demonstrate favorable objective metrics, as well as visual results, compared to HEVC on the Sintel and Tears of Steel short films.[54] A follow-up comparison by Jan Ozer of Streaming Media Magazine confirmed this, and concluded that “AV1 is at least as good as HEVC now”.[55] Ozer noted that his and Bitmovin’s results contradicted a comparison by Fraunhofer Institute for Telecommunications from late 2016[56] that had found AV1 65.7% less efficient than HEVC, underperforming even H.264/AVC which they concluded as being 10.5% more efficient. Ozer justified this discrepancy by having used encoding parameters endorsed by each encoder vendor, as well as having more features in the newer AV1 encoder.[56] Decoding performance was at about half the speed of VP9 according to internal measurements from 2017.[45]

Tests from Netflix in 2017, based on measurements with PSNR and VMAF at 720p, showed that AV1 was about 25% more efficient than VP9 (libvpx).[57] Tests from Facebook conducted in 2018, based on PSNR, showed that the AV1 reference encoder was able to achieve 34%, 46.2% and 50.3% higher data compression than libvpx-vp9, x264 High profile, and x264 Main profile respectively.[58][3]

Tests from Moscow State University in 2017 found that VP9 required 31% and HEVC 22% more bitrate than AV1 in order to achieve similar levels of quality.[59] The AV1 encoder, however, operated at speeds “2500–3500 times lower than competitors” due to the lack of optimization at that time.[60]

Tests from the University of Waterloo in 2020 found that, when using a mean opinion score (MOS) for 2160p (4K) video, AV1 had bitrate savings of 9.5% compared to HEVC and 16.4% compared to VP9. They also concluded that, at the time of the study, 2160p AV1 encodes took on average 590× longer than encoding with AVC, while HEVC took on average 4.2× longer and VP9 5.2× longer than AVC.[61][62]

The latest encoder comparison by Streaming Media Magazine as of September 2020, which used moderate encoding speeds, VMAF, and a diverse set of short clips, indicated that the open-source libaom and SVT-AV1 encoders took about twice as long to encode as x265 in its “veryslow” preset while using 15–20% less bitrate, or about 45% less bitrate than x264 veryslow. The best-in-test AV1 encoder, Visionular’s Aurora1, in its “slower” preset, was as fast as x265 veryslow while saving 50% bitrate over x264 veryslow.[63]

CapFrameX tested GPU performance with AV1 decoding.[64] On October 5, 2022, Cloudflare announced that it had a beta AV1 player.[65]

Profiles and levels


Profiles


AV1 defines three profiles for decoders: Main, High, and Professional. The Main profile allows for a bit depth of 8 or 10 bits per sample with 4:0:0 (greyscale) and 4:2:0 (quarter) chroma sampling. The High profile further adds support for 4:4:4 chroma sampling (no subsampling). The Professional profile extends capabilities to full support for 4:0:0, 4:2:0, 4:2:2 (half) and 4:4:4 chroma subsampling with 8-, 10- and 12-bit color depths.[16]

Feature comparison between AV1 profiles

| Feature | Main (0) | High (1) | Professional (2) |
| --- | --- | --- | --- |
| Bit depth | 8 or 10 bit | 8 or 10 bit | 8, 10 or 12 bit |
| 4:0:0 chroma subsampling | Yes | Yes | Yes |
| 4:2:0 | Yes | Yes | Yes |
| 4:2:2 | No | No | Yes |
| 4:4:4 | No | Yes | Yes |
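The profile constraints can be expressed as a small selection helper (`av1_profile` is an illustrative name; the mapping follows the profile descriptions above):

```python
def av1_profile(bit_depth: int, subsampling: str) -> str:
    """Pick the minimum AV1 profile for a given bit depth and chroma
    subsampling mode, following the feature comparison above."""
    if bit_depth == 12 or subsampling == "4:2:2":
        return "Professional"     # 12-bit and 4:2:2 need profile 2
    if subsampling == "4:4:4":
        return "High"             # 8/10-bit 4:4:4 fits profile 1
    return "Main"                 # 8/10-bit 4:0:0 or 4:2:0

print(av1_profile(10, "4:2:0"))   # Main
print(av1_profile(8, "4:4:4"))    # High
print(av1_profile(12, "4:2:0"))   # Professional
```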

Levels


AV1 defines levels for decoders, ranging from 2.0 to 6.3, each with maximum values for key stream variables.[66] Which levels can be implemented depends on hardware capability.

Example resolutions would be 426×240@30.0 fps for level 2.0, 854×480@30.0 fps for level 3.0, 1920×1080@30.0 fps for level 4.0, 3840×2160@60.0 fps for level 5.1, 3840×2160@120.0 fps for level 5.2, and 7680×4320@120.0 fps for level 6.2. Level 7 has not been defined yet.[67]

Per-level decoder limits (MaxPicSize, MaxHSize, MaxVSize, MaxDisplayRate, MaxDecodeRate, MaxHeaderRate, MainMbps, HighMbps, minimum compression ratio, and maximum tiles and tile columns) are tabulated in the AV1 specification for each seq_level_idx from 0 (level 2.0) to 19 (level 6.3). For example, level 2.0 allows at most 147,456 samples per picture (up to 2048×1152) at a display rate of 4,423,680 samples per second and a Main-tier bitrate of 1.5 Mbit/s.

Supported container formats

Standardized

  • ISO base media file format:[68] the ISOBMFF containerization spec by AOMedia was the first to be finalized and the first to gain adoption. This is the format used by YouTube.
  • Matroska: version 1 of the Matroska containerization spec[69] was published in late 2018.[70]

Unfinished standards

  • MPEG Transport Stream (MPEG TS)[71]
  • Real-time Transport Protocol: a preliminary RTP packetization spec by AOMedia defines the transmission of AV1 OBUs directly as the RTP payload.[51] It defines an RTP header extension that carries information about video frames and their dependencies, which is generally useful for scalable video coding. The carriage of raw video data also differs from, for example, MPEG TS over RTP in that other streams, such as audio, must be carried separately.
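
As a rough illustration of the payload structure, the draft prefixes each RTP payload with a one-byte aggregation header; the sketch below decodes it (field layout Z|Y|W W|N|--- as described in the AOMedia draft; the function name is hypothetical):

```python
def parse_av1_aggregation_header(first_byte: int) -> dict:
    """Decode the one-byte AV1 RTP aggregation header (sketch based on
    the AOMedia draft payload spec)."""
    return {
        "Z": (first_byte >> 7) & 1,  # 1: first OBU element continues a previous packet
        "Y": (first_byte >> 6) & 1,  # 1: last OBU element continues in the next packet
        "W": (first_byte >> 4) & 3,  # OBU element count (0 = every element length-prefixed)
        "N": (first_byte >> 3) & 1,  # 1: first packet of a coded video sequence
    }
```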

Not standardized

  • WebM: as a matter of formality, AV1 had not been sanctioned into the subset of Matroska known as WebM as of late 2019.[72] However, support has been present in libwebm since May 2018.[73]
  • On2 IVF: this format was inherited from the first public release of VP8, where it served as a simple development container.[74] rav1e also supports this format.[75]
  • Pre-standard WebM: Libaom featured early support for WebM before Matroska containerization was specified; this has since been changed to conform to the Matroska spec.[76]

Adoption

Content providers

In October 2016, Netflix stated they expected to be an early adopter of AV1.[77] On 5 February 2020, Netflix began using AV1 to stream select titles on Android, providing 20% improved compression efficiency over their VP9 streams.[78] On 9 November 2021, Netflix announced it had begun streaming AV1 content to a number of TVs with AV1 decoders as well as the PlayStation 4 Pro.[79]

YouTube playback statistics for a video using the AV1 video codec and the Opus audio codec

In 2018, YouTube began rolling out AV1, starting with its AV1 Beta Launch Playlist. According to the description, the videos are (to begin with) encoded at high bitrate to test decoding performance, and YouTube has “ambitious goals” for rolling out AV1. YouTube for Android TV supports playback of videos encoded in AV1 on capable platforms as of version 2.10.13, released in early 2020.[80]

In February 2019, Facebook, following its own positive test results, said it would gradually roll out AV1 as soon as browser support emerged, starting with its most popular videos.[58] Meta has also expressed interest in SVT-AV1, as Google engineer Matt Frost noted in an Intel YouTube video; the intention was to carry out a first test in 2023, once hardware support became widespread,[81] although no such statement appeared in a later Streaming Media video with Jan Ozer. In the meantime, Meta announced MSVP (Meta Scalable Video Processor).[82] On 4 November 2022, a Meta tech blog post, followed the same day by a post from Mark Zuckerberg, demonstrated the AV1 codec in action on Instagram Reels; AV1 is especially helpful on slower internet connections, but it improves the experience for everyone.[83][84]

In June 2019, Vimeo’s videos in the “Staff picks” channel were available in AV1.[85] Vimeo is using and contributing to Mozilla’s Rav1e encoder and expects, with further encoder improvements, to eventually provide AV1 support for all videos uploaded to Vimeo as well as the company’s “Live” offering.[85]

On 30 April 2020, iQIYI announced AV1 support for users on PC web browsers and Android devices, making it, per the announcement, the first Chinese video streaming site to adopt the AV1 format.[86]

Twitch plans to roll out AV1 for its most popular content in 2022 or 2023, with universal support projected to arrive in 2024 or 2025.[87]

In April 2021, Roku removed the YouTube TV app from the Roku streaming platform after a contract expired. It was later reported that Roku streaming devices do not use processors that support the AV1 codec. In December 2021, YouTube and Roku agreed to a multiyear deal to keep both the YouTube TV app and the YouTube app on the Roku streaming platform. Roku had argued that using processors in their streaming devices that support the royalty-free AV1 codec would increase costs to consumers.[88][89]

Software implementations

  • Libaom is the reference implementation. It includes an encoder (aomenc) and a decoder (aomdec). As the former research codec, it has the advantage of being made to justifiably demonstrate efficient use of every feature, but at the general cost of encoding speed. At feature freeze, the encoder had become problematically slow, but dramatic speed optimizations with negligible efficiency impact have subsequently been made.[90][19]
  • SVT-AV1 includes an open-source encoder and decoder developed primarily by Intel in collaboration with Netflix,[91][92] with a special focus on threading performance. The decoder was implemented by Cidana Corporation together with the AOMedia Software Implementation Working Group (SIWG). In August 2020, the Alliance for Open Media Software Implementation Working Group adopted SVT-AV1 as its production encoder.[93] SVT-AV1 1.0.0 was released on April 22, 2022.[94]
  • rav1e is an encoder written in Rust and assembly language.[75] rav1e takes the opposite developmental approach to aomenc: start out as the simplest (therefore fastest) conforming encoder, and then improve efficiency over time while remaining fast.[90]
  • dav1d is a decoder written in C99 and assembly focused on speed and portability.[95] The first official version (0.1) was released in December 2018.[96] Version 0.2 was released in March 2019, with users able to “safely use the decoder on all platforms, with excellent performance”, according to the developers.[97] Version 0.3 was announced in May 2019 with further optimizations demonstrating performance 2 to 5 times faster than aomdec.[98] Version 0.5 was released in October 2019.[99] Firefox 67 switched from Libaom to dav1d as a default decoder in May 2019.[100] In 2019, dav1d v0.5 was rated the best decoder in comparison to libgav1 and libaom.[101] dav1d 0.9.0 was released on May 17, 2021.[102] dav1d 0.9.2 was released on September 3, 2021.[103] dav1d 1.0.0 was released on March 18, 2022.[104]
  • Cisco AV1 is a proprietary live encoder that Cisco developed for its Webex teleconference products. The encoder is optimized for latency[105] and the constraint of having a “usable CPU footprint”, as with a “commodity laptop”.[106] Cisco stressed that at their operating point – high speed, low latency – the large toolset of AV1 does not preclude a low encoding complexity.[105] Rather, the availability of tools for screen content and scalability in all profiles enabled them to find good compression-to-speed tradeoffs, better even than with HEVC.[106] Compared to their previously deployed H.264 encoder, a particular area of improvement was in high resolution screen sharing.[105]
  • libgav1 is a decoder written in C++11 released by Google.[107]

Several other parties have announced that they are working on encoders, including EVE for AV1 (in beta testing),[108] NGCodec,[109] Socionext,[110] Aurora[111] and MilliCast.[112]

Software support

Web browsers

Video players

Encoder front-ends

  • FFmpeg (libaom support since version 4.0,[127] rav1e support since version 4.3,[128] SVT-AV1 support since version 4.4[129])
  • HandBrake (since version 1.3.0, 9 November 2019; decoding support)[130]
  • Several FFmpeg- and/or Avisynth-based GUI front-ends are available on GitHub, such as FastFlix, StaxRip, Hybrid, FFmpeg Batch AV Converter, Nmkoder (a GUI for Av1an that also offers an FFmpeg-only mode), and Shutter Encoder (distributed from its own site).
  • Bitmovin Encoding (since version 1.50.0, 4 July 2018)[131]
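
As an illustration of driving one of these front-ends programmatically, the sketch below builds (but does not run) an FFmpeg command line for libaom-av1; the file names are placeholders, and FFmpeg 4.0+ built with libaom is assumed:

```python
import subprocess

def av1_encode_cmd(src: str, dst: str, crf: int = 30) -> list[str]:
    """Build an FFmpeg libaom-av1 command in constant-quality mode
    (-crf with -b:v 0, the usual quality-targeted setup for AV1)."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libaom-av1",
        "-crf", str(crf), "-b:v", "0",  # quality target, no bitrate cap
        "-c:a", "libopus",              # Opus audio, the common AV1 pairing
        dst,
    ]

# To actually encode (requires FFmpeg with libaom on PATH):
# subprocess.run(av1_encode_cmd("input.mp4", "output.mkv"), check=True)
```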

Video editors

  • DaVinci Resolve (decoding support since version 17.2, May 2021; Intel Arc hardware encoding support since version 17.4.6, March 2022; Nvidia hardware encoding support since version 18.1, November 2022)

Others

Operating system support

Hardware

Several Alliance members demonstrated AV1-enabled products at IBC 2018,[181][182] including Socionext’s hardware-accelerated encoder. According to Socionext, the encoding accelerator is FPGA-based and can run on an Amazon EC2 F1 cloud instance, where it runs 10 times faster than existing software encoders.

According to Mukund Srinivasan, chief business officer of AOM member Ittiam, early hardware support will be dominated by software running on non-CPU hardware (such as GPGPU, DSP or shader programs, as is the case with some VP9 hardware implementations), as fixed-function hardware will take 12–18 months after bitstream freeze until chips are available, plus 6 months for products based on those chips to hit the market.[41] The bitstream was finally frozen on 28 March 2018, meaning chips could be available sometime between March and August 2019.[22] According to the above forecast, products based on chips could then be on the market at the end of 2019 or the beginning of 2020.

  • On 7 January 2019, NGCodec announced AV1 support for NGCodec accelerated with Xilinx FPGAs.[109]
  • On 18 April 2019, Allegro DVT announced its AL-E210 multi-format video encoder hardware IP, the first publicly announced hardware AV1 encoder.[183][184]
  • On 23 April 2019, Rockchip announced their RK3588 SoC which features AV1 hardware decoding up to 4K 60fps at 10-bit color depth.[178]
  • On 9 May 2019, Amphion announced a video decoder with AV1 support up to 4K 60fps.[185]
  • On 28 May 2019, Realtek announced the RTD2893, its first integrated circuit with AV1 decoding, up to 8K.[176][177]
  • On 17 June 2019, Realtek announced the RTD1311 SoC for set-top boxes with an integrated AV1 decoder.[175]
  • On 20 October 2019, a roadmap from Amlogic showed three set-top box SoCs able to decode AV1 content: the S805X2, S905X4 and S908X.[186] The S905X4 was used in the SDMC DV8919 by December.[187]
  • On 21 October 2019, Chips&Media announced the WAVE510A VPU supporting decoding AV1 at up to 4Kp120.[188]
  • On 26 November 2019, MediaTek announced the world’s first smartphone SoC with an integrated AV1 decoder.[160] The Dimensity 1000 is able to decode AV1 content up to 4K 60fps.
  • On 3 January 2020, LG Electronics announced that its 2020 8K TVs, which are based on the α9 Gen 3 processor, support AV1.[189][190]
  • At CES 2020, Samsung announced that its 2020 8K QLED TVs, featuring Samsung’s “Quantum Processor 8K SoC,” are capable of decoding AV1.[191]
  • On 13 August 2020, Intel announced that their Intel Xe-LP GPU in Tiger Lake will be their first product to include AV1 fixed-function hardware decoding.[155][154]
  • On 1 September 2020, Nvidia announced that their Nvidia GeForce RTX 30 Series GPUs will support AV1 fixed-function hardware decoding.[168]
  • On 2 September 2020, Intel officially launched Tiger Lake 11th Gen CPUs with AV1 fixed-function hardware decoding.[192]
  • On 15 September 2020, AMD merged patches into the amdgpu drivers for Linux which adds support for AV1 decoding support on RDNA2 GPUs.[141][193][194]
  • On 28 September 2020, Roku refreshed the Roku Ultra including AV1 support.[195]
  • On 30 September 2020, Intel released version 20.3.0 for the Intel Media Driver which added support for AV1 decoding on Linux.[152][153][196]
  • On 10 October 2020, Microsoft confirmed support for AV1 hardware decoding on Xe-LP(Gen12), Ampere and RDNA2 with a blog post.[142]
  • On 11 January 2021, Intel announced new Pentium and Celeron models with an 11th Gen UHD iGPU capable of AV1 decoding.[197]
  • On January 12, 2021, Samsung announced the Exynos 2100 with claimed AV1 decode support; however, Samsung has not yet implemented AV1 support.[198][151]
  • On 16 March 2021, Intel officially launched Rocket Lake 11th Gen CPUs with AV1 fixed-function hardware decoding.[199]
  • On October 19, 2021, Google officially launched the Tensor featuring BigOcean supporting AV1 fixed-function hardware decoding.[200][151]
  • On 27 October 2021, Intel officially launched Alder Lake 12th Gen CPUs with AV1 fixed-function hardware decoding.[201]
  • On 4 January 2022, Intel officially launched Alder Lake 12th Gen mobile CPUs and non-K series desktop CPUs with AV1 fixed-function hardware decoding.[202]
  • On February 17, 2022, Intel officially announced that Arctic Sound-M has the industry’s first hardware-based AV1 encoder inside a GPU.[203]
  • On March 30, 2022, Intel officially announced the Intel Arc Alchemist family with AV1 fixed-function hardware decoding and fixed-function hardware encoding.[204][205][206]
  • On September 20, 2022, Nvidia officially announced the Nvidia GeForce RTX 40 series with AV1 fixed-function hardware decoding and fixed-function hardware encoding.[171][172][173]
  • On September 22, 2022, Google released the Chromecast with Google TV (HD), the first Chromecast device with support for AV1 hardware decoding.[207]
  • On September 26, 2022, AMD released Ryzen 7000 series CPUs with an embedded GPU capable of AV1 hardware decoding.[144]
  • On 27 September 2022, Intel officially launched Raptor Lake 13th Gen CPUs with AV1 fixed-function hardware decoding.[208]

Patent claims

Sisvel, a Luxembourg-based company, has formed a patent pool and sells a patent license for AV1. The pool was announced in early 2019,[209] but a list of claimed patents was first published on 10 March 2020.[210] This list contains over 1,050 patents.[210] The substance of the patent claims remains to be challenged.[211] Sisvel has stated that it will not seek content royalties, but its license makes no exemption for software.[210][211]

As of March 2020, the Alliance for Open Media has not responded to the list of patent claims. Its statement after Sisvel’s initial announcement reiterated the commitment to its royalty-free patent license and mentioned the “AOMedia patent defense program to help protect AV1 ecosystem participants in the event of patent claims”, but did not mention the Sisvel claim by name.[212]

According to The WebM Project, Google does not plan to alter its current or upcoming AV1 usage plans even though it is aware of the patent pool, noting that third parties cannot be stopped from demanding licensing fees for any technology, whether open-source, royalty-free, or free-of-charge.[213]

On July 7, 2022, it was revealed that the European Union’s antitrust regulators had opened an investigation into AOM and its licensing policy, which they said may restrict innovators’ ability to compete with the AV1 technical specification and eliminate their incentives to innovate.[214]

The Commission has information that AOM and its members may be imposing licensing terms (mandatory royalty-free cross licensing) on innovators that were not a part of AOM at the time of the creation of the AV1 technical specification, but whose patents are deemed essential to (its) technical specifications

AV1 Image File Format (AVIF)

AV1 Image File Format (AVIF) is an image file format specification for storing images or image sequences compressed with AV1 in the HEIF file format.[215] It competes with HEIC, which uses the same ISOBMFF-based container format but HEVC for compression.
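
Because AVIF reuses the ISOBMFF container, an AVIF file can be recognized heuristically from the brands in its leading 'ftyp' box. A minimal sketch (not a full ISOBMFF parser):

```python
def looks_like_avif(data: bytes) -> bool:
    """Heuristically detect an AVIF file: an ISOBMFF 'ftyp' box whose
    major or compatible brands include 'avif'."""
    if len(data) < 12 or data[4:8] != b"ftyp":
        return False
    box_size = int.from_bytes(data[:4], "big")  # ISOBMFF boxes start with a 32-bit size
    brands = data[8:box_size]  # major brand, minor version, compatible brands
    return b"avif" in brands
```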

See also

  • Versatile Video Coding – a competing royalty-bearing video coding standard

References
