Setting Up the Video Encoder Application
The following setup guidelines, shared by Akamai, are intended to help if you are still encountering issues while streaming after following the encoder setup walkthrough to the letter, or if you need more information about your desired setup.
In This Chapter:
B) Supported Codecs
C) Stream Redundancy
D) Encoding Guidelines
After you complete your live stream setup, you will have the information needed to configure your video encoder to capture your stream and push it to the entry points. Several third-party live video encoders are qualified for Media Services Live: Stream Packaging.
The streaming technology ingests the RTMP protocol and outputs Apple HTTP Live Streaming (HLS) segments; full RTMP-in/HLS-out compatibility requires video in H.264 format and audio in AAC-LC format.
If you use an encoder other than those listed in the Dacast platform, you must ensure that it issues the onMetaData packet as part of its broadcast and that the packet includes the following information; otherwise, you will experience errors:
• audiodatarate (along with the other audio- and video-related metadata properties your encoder emits)

If your stream is audio-only, the encoder should remove the video-related entries; similarly, if the stream is video-only, it should remove the audio-related properties.
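As a sketch, the onMetaData payload can be treated as a key/value map. The property names below are standard FLV metadata fields; which subset your particular encoder emits is encoder-specific, so treat the exact list here as illustrative:

```python
# Sketch of an FLV onMetaData payload as a key/value map. The property
# names are standard FLV metadata fields; the exact subset an encoder
# emits varies (assumption for illustration).
VIDEO_PROPS = {"width", "height", "videodatarate", "framerate", "videocodecid"}
AUDIO_PROPS = {"audiodatarate", "audiosamplerate", "audiosamplesize",
               "stereo", "audiocodecid"}

def prune_metadata(metadata, has_video=True, has_audio=True):
    """Drop video-related entries for audio-only streams, and vice versa."""
    drop = set()
    if not has_video:
        drop |= VIDEO_PROPS
    if not has_audio:
        drop |= AUDIO_PROPS
    return {k: v for k, v in metadata.items() if k not in drop}

meta = {
    "width": 1280, "height": 720, "videodatarate": 1500, "framerate": 30,
    "videocodecid": 7,                   # 7 = AVC/H.264 in FLV
    "audiodatarate": 128, "audiosamplerate": 44100, "audiosamplesize": 16,
    "stereo": True, "audiocodecid": 10,  # 10 = AAC in FLV
}

# Audio-only stream: the video-related entries are removed.
audio_only = prune_metadata(meta, has_video=False)
```

For a video-only stream, the same helper with `has_audio=False` removes the audio-related properties instead.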
B) Supported Codecs

A codec is a device or computer program for encoding or decoding a digital data stream or signal.
A codec encodes a data stream or a signal for transmission and storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications.
Media Services Live Stream Packaging supports the following codecs:
HLS Output:
• Video: H.264 Baseline Profile Level 3.0, Main Profile Level 3.1, High Profile Level 4.1, and MPEG-4 Simple Profile
• Audio: Refer to the Apple® Technical Note at: https://developer.apple.com/library/ios/#technotes/tn2224/_index.html
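As an illustrative check (the helper below is hypothetical, not part of any Akamai tooling), an encoder's video settings can be validated against the supported profiles listed above before the stream is configured:

```python
# Hypothetical helper: validate video settings against the HLS output
# profiles listed in this section (H.264 plus MPEG-4 Simple Profile).
SUPPORTED_H264 = {
    ("Baseline", 3.0),
    ("Main", 3.1),
    ("High", 4.1),
}

def is_supported(codec, profile=None, level=None):
    """True when (codec, profile, level) matches a supported combination."""
    if codec == "MPEG-4" and profile == "Simple":
        return True
    if codec == "H.264":
        return (profile, level) in SUPPORTED_H264
    return False
```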
C) Stream Redundancy
It is highly recommended that you use a backup encoder in case there are issues with the primary broadcast.
If you use a single encoder to broadcast to both primary and backup entry points, your set up lacks redundancy. If the encoder fails, both primary and backup broadcasts will cease. To avoid this, consider using two encoders, with one broadcasting to the primary entry point and the other to the backup. However, do not use both encoders to broadcast to both primary and backup entry points, as the two broadcasts will conflict.
For optimal performance, a maximum of 20 streams per stream ID is recommended.
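The recommended two-encoder layout can be sketched as a simple configuration check. The entry-point URLs below are placeholders, not real endpoints:

```python
# Sketch of the recommended redundant layout: each encoder pushes to
# exactly one entry point. The URLs are placeholders (assumptions).
encoders = {
    "encoder-1": ["rtmp://primary.example.invalid/live"],  # placeholder URL
    "encoder-2": ["rtmp://backup.example.invalid/live"],   # placeholder URL
}

def layout_is_redundant(encoders):
    """True when there are two encoders, each feeding exactly one entry
    point, and no entry point is fed by two encoders (which would
    conflict)."""
    all_targets = [t for targets in encoders.values() for t in targets]
    one_each = all(len(targets) == 1 for targets in encoders.values())
    no_conflict = len(all_targets) == len(set(all_targets))
    return len(encoders) == 2 and one_each and no_conflict
```

A single encoder feeding both entry points, or two encoders each feeding both, would fail this check, matching the guidance above.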
D) Encoding Guidelines

The following sections provide recommendations for:
D1) GOP (Group of Pictures) Interval
D2) Frame Rate
D3) Scene Detection
D4) Timestamp Alignment
D5) Interlaced Content
D6) Encoder CPU Load
D1) GOP (Group of Pictures) Interval
Flash can only change bit rates at GOP intervals, so you must carefully select how far apart you space them. GOPs may also be referred to as keyframes or I-frames in your encoding software.
- If you space your intervals closer than two seconds, your video will react quite quickly to heuristics-recommended bit rate changes, but quality will suffer slightly.
- If you space the intervals too far apart, your video will often react too slowly to heuristics-recommended changes, and seeking ability will also suffer.

It is recommended that you space your GOPs between two and four seconds apart, drifting closer to two seconds as the bit rate increases.
It is also recommended that you keep your GOP interval constant across all bit rates, although some success has been seen with GOP intervals that change as the bit rate changes. Note that varying intervals might place a small amount of additional load on the server and can affect the time at which the server is able to switch to the new bit rate.
Also, GOPs should be closed and of a constant size, and the audio for your videos should be encoded at the same bit rate and sample rate.
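Encoders usually take the keyframe interval in frames rather than seconds, so the two-to-four-second guidance translates as follows (a sketch; the ffmpeg/x264 flag names in the comment are real, but the specific numbers are examples):

```python
def gop_frames(gop_seconds, fps):
    """Keyframe interval in frames for a target GOP duration in seconds."""
    return int(round(gop_seconds * fps))

# At 30 fps, a 2-second GOP is 60 frames and a 4-second GOP is 120 frames.
# With an ffmpeg/x264-based encoder this would typically map to "-g 60"
# (GOP size) plus a matching "-keyint_min 60" to keep GOPs a constant size.
two_second = gop_frames(2, 30)
four_second = gop_frames(4, 30)
```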
D2) Frame Rate
Configure the frame rate on your encoder based on the target devices to which the live content will be served. To avoid performance issues and improve playback quality, a frame rate of 30 frames per second is recommended.
D3) Scene Detection
Some encoders have an option to enable scene change detection, which inserts IDR keyframes when a scene change occurs. This improves visual quality by allowing the entire frame to be redrawn when necessary. However, the extra keyframes/IDRs can raise the overall bit rate of the video, increasing the likelihood of rebuffering. Also, to maintain the configured bit rate during live encoding, the encoder can only include extra keyframes/IDRs for higher bit rates, which might cause switching issues during playback. It is therefore recommended that you disable scene change detection for live streams.
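As a sketch of how this recommendation is commonly applied, assuming an ffmpeg/x264-based encoder (other encoders expose an equivalent switch; the input and output names are placeholders):

```python
# Assemble example ffmpeg arguments for a live rendition with scene-change
# detection disabled. "-sc_threshold 0" turns off x264's scene-cut keyframe
# insertion, so keyframes land only on the configured GOP boundary.
fps = 30
gop_seconds = 2

args = [
    "ffmpeg", "-i", "input_stream",    # placeholder input (assumption)
    "-c:v", "libx264",
    "-g", str(fps * gop_seconds),      # fixed GOP of 60 frames
    "-keyint_min", str(fps * gop_seconds),
    "-sc_threshold", "0",              # disable scene-change detection
    "-c:a", "aac",
    "output.m3u8",                     # placeholder output (assumption)
]
```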
D4) Timestamp Alignment
If you are using multiple encoders for your Dynamic Streaming event, you must align their timestamps so they are in agreement.
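A minimal sketch of what "in agreement" means: keyframe timestamps from each encoder should land on the same boundaries so the player can switch renditions cleanly (the millisecond tolerance below is an assumption for illustration):

```python
def timestamps_aligned(ts_a, ts_b, tolerance_ms=1):
    """True when two encoders' keyframe timestamps (in milliseconds)
    match pairwise within a small tolerance."""
    return len(ts_a) == len(ts_b) and all(
        abs(a - b) <= tolerance_ms for a, b in zip(ts_a, ts_b)
    )

# Two encoders producing 2-second GOPs from the same clock:
enc1 = [0, 2000, 4000, 6000]
enc2 = [0, 2001, 4000, 5999]
```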
D5) Interlaced Content
Interlacing is often found on content originally created for display on television, as opposed to a digital device. This type of footage is created by running “half frames” at twice the frame rate, by drawing every other line and then filling in the remaining lines on a second pass. On a digital screen, both frames must be combined and displayed at the same time. This results in noticeable lines in the footage, which is particularly bad when there is motion in the video.
There are several methods available to de-interlace content, each with its own benefits, but the recommendation is to correct the interlaced footage as early in the production process as possible to ensure the highest quality. Due to how the deinterlacing process works, it is very important that it be done before applying additional modifications such as frame scaling. Attempting to de-interlace footage that has been modified from its original state will produce noticeably bad results.
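Assuming an ffmpeg-based workflow as an example, the ordering requirement above means the deinterlace filter must precede any scaling in the filter chain (yadif is one of several deinterlacers; the filenames are placeholders):

```python
# Build an example ffmpeg filter chain: deinterlace (yadif) BEFORE scaling.
# Reversing the order would scale the interlaced half-frames first and
# produce noticeably worse results.
filters = ["yadif", "scale=1280:720"]
vf = ",".join(filters)

cmd = ["ffmpeg", "-i", "interlaced_input.ts",  # placeholder input
       "-vf", vf,
       "deinterlaced_output.mp4"]              # placeholder output
```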
D6) Encoder CPU Load
For live encoding, encoders are generally configured to transcode the input stream into multiple renditions (characteristics such as bit rate and frame size might differ across renditions). Configuring many renditions and enabling many processing filters increases load on the encoder; at higher loads, the encoder may lag behind in publishing content, miss alignment, and drop frames. It is therefore best if peak CPU usage on your encoders does not exceed 70%.
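The 70% guidance can be enforced with a simple watchdog. The sketch below works on recent CPU-usage samples rather than tying itself to any particular monitoring library:

```python
def over_budget(cpu_samples, threshold=70.0):
    """True when peak CPU usage (in percent) within the sample window
    exceeds the recommended 70% ceiling, signalling that renditions or
    processing filters should be reduced."""
    return max(cpu_samples) > threshold

# Example: a spike to 85% exceeds the budget even though the average is low.
samples = [55.0, 62.5, 85.0, 60.0]
```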