What is HLS Streaming and When Should You Use It? [2020 Update]


In the past, Adobe’s Flash video technology was the main method of delivering video via the internet. In recent years, however, there’s been a major shift in the world of online video. Specifically, online video delivered by protocols like HLS and played in HTML5 video players has increasingly replaced Adobe’s Flash technology.

For both broadcasters and viewers, this is a largely positive change. Why? First, HTML5 and HLS are open specifications, so developers can adapt them to their needs, and anyone can use them free of charge. Additionally, these newer HTML5 and HLS streaming technologies are safer, more reliable, and faster than earlier ones.

For content producers, HTML5 and HLS live streaming technologies also bring some major advantages. However, there are also some notable disadvantages within this realm of content production. In particular, it can take considerable time and effort to replace legacy systems and technologies with new standards, which may not work the same across all platforms. As with all technological innovations, growing pains are inevitable.

To get you up to speed on these changes, we’ve framed this article for both longtime professional broadcasters and newcomers to streaming media. Whether you stream live pro events or broadcast from your own website, we’ve got you covered. Overall, our focus here is on HLS video streaming.

We’ll also discuss the role of HTML5 video streaming as it relates to HLS. In particular, we’ll cover basic streaming protocol definitions, other streaming protocols, and provide a detailed overview of the main topic of this post: what is HLS streaming and when should you use it?

Please note that we have updated this post to reflect the most current information about HLS streaming for 2020. We have added information about additional protocols, M3U8, HTML5 video players and more.

Table of Contents:

  • What is HLS Streaming?
  • A Basic Breakdown: How Does HLS Work?
  • Technical Overview of HLS Streaming
  • Key Benefits of HLS Streaming
  • Comparing HLS Streaming to Other Video Streaming Protocols
  • Advantages of HLS Over Other Protocols
  • Devices and Browsers that Support HLS
  • When to Use HLS
  • Building an RTMP to HLS Workflow
  • HTML5 Video Streaming with HLS
  • The Future of Live Streaming
  • Conclusion

What is HLS Streaming and How Does it Work?

HLS is a live streaming protocol that is considered the video delivery “technology of now.”

HLS stands for HTTP Live Streaming. In short, HLS is a media streaming protocol for delivering visual and audio media to viewers over the internet.

Apple first launched the HTTP live streaming (HLS) protocol in the summer of 2009. Apple timed this release to coincide with the debut of the iPhone 3GS. As you may recall, previous iPhone models had experienced many problems with streaming media online. These issues arose, at least in part, because those older devices often switched between Wi-Fi and mobile networks mid-stream.

Prior to the release of HLS, Apple used QuickTime Streaming Server as its media streaming standard. Though a robust service, QuickTime relied on the RTSP protocol and non-standard ports for data transfer, so firewalls often blocked its streams.

Combined with the slow average internet speeds of the time, these limitations doomed QuickTime Streaming Server. As a result, this early experiment in live streaming technology never reached a wide audience.

That said, HTTP Live Streaming ultimately drew on the lessons learned from creating and rolling out the QuickTime service.

A Basic Breakdown: How Does HLS Work?

We’ve covered the textbook definition of HLS. Before we move on to a more technical overview of how this protocol works, let’s take a moment to go back to basics.

As we’ve mentioned, HLS is an important protocol for live streaming. The live streaming process that is compatible with the greatest number of devices and browsers looks something like this:

  1. Capturing devices (cameras, microphones, etc.) capture the content.
  2. The content is sent from the capturing device to a live encoder. 
  3. The encoder transmits the content to the video hosting platform via RTMP.
  4. The video hosting platform uses HLS to transmit the content to an HTML5 video player.

This process requires two main software solutions: a live video encoder and a powerful video hosting platform. If you choose to stream with HLS, you’ll want to make sure that both pieces of software support the protocols and features mentioned above.
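The four-step workflow above can be sketched as a simple data pipeline. The stage names and labels below are purely illustrative, not a real API:

```javascript
// Sketch: the capture-to-playback workflow as a list of hops.
// Each hop names what is carried and which protocol (if any) carries it.
const workflow = [
  { from: "camera + microphone", to: "live encoder", carries: "raw A/V" },
  { from: "live encoder", to: "video host", carries: "RTMP stream" },
  { from: "video host", to: "HTML5 player", carries: "HLS stream" },
];

// The viewer-facing leg of this workflow is always HLS.
const lastHop = workflow[workflow.length - 1];
console.log(lastHop.carries);
```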

HTML5 video players powered by HLS are great for reaching the largest audience since this duo is practically universal. Dacast is a feature-rich live video streaming solution that includes HLS streaming and a customizable, white-label HTML5 video player.

Want to see the HLS streaming protocol in action? Sign up for your free trial of Dacast to try out HLS and other powerful streaming features.

START MY TRIAL

Technical Overview of HLS Streaming

With that background in mind, how does HLS streaming technology work?

First, the HLS protocol chops up video content into short (roughly 10-second) chunks with the .ts file extension (MPEG-2 Transport Stream). Next, an HTTP server stores those segments, and HTTP delivers these short clips to viewers on their devices.

HLS will play video encoded with the H.264 or HEVC/H.265 codecs.

The HTTP server also creates an M3U8 playlist file (i.e., a manifest) that serves as an index for the video chunks. The server generates this index file even if you broadcast with only a single quality option.
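To make the segment-plus-manifest idea concrete, here is a sketch of a simplified media playlist being parsed into its segment list. The playlist text and segment file names are made up for illustration; real manifests contain more tags than this parser handles.

```javascript
// Sketch: parse a (simplified) HLS media playlist into its segments.
// Handles only #EXTINF and segment URI lines; everything else is skipped.
function parseMediaPlaylist(m3u8Text) {
  const lines = m3u8Text.trim().split("\n");
  const segments = [];
  let duration = null;
  for (const line of lines) {
    if (line.startsWith("#EXTINF:")) {
      // "#EXTINF:10.0," -> the next segment's duration in seconds
      duration = parseFloat(line.slice("#EXTINF:".length));
    } else if (!line.startsWith("#")) {
      segments.push({ uri: line, duration });
      duration = null;
    }
  }
  return segments;
}

// An illustrative two-segment playlist, similar in shape to real M3U8 files.
const playlist = [
  "#EXTM3U",
  "#EXT-X-VERSION:3",
  "#EXT-X-TARGETDURATION:10",
  "#EXTINF:10.0,",
  "segment0.ts",
  "#EXTINF:10.0,",
  "segment1.ts",
  "#EXT-X-ENDLIST",
].join("\n");

console.log(parseMediaPlaylist(playlist));
```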

Now, let’s consider how playback quality works with HLS video streaming. With this protocol, a given user’s video player software (like an HTML5 video player) detects deteriorating or improving network conditions. 

If either occurs, the player software first reads the main index playlist and determines which quality video is ideal. Then the software reads the quality-specific index file to determine which chunk of video corresponds to the point at which the viewer is watching. If you’re streaming with Dacast, you can use your M3U8 online player to test your HLS stream. Though this may sound technically complex, the entire process is seamless for the user.
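The quality-selection step can be sketched as follows. The renditions and bandwidth figures are invented for illustration, and real players use considerably more sophisticated heuristics than this:

```javascript
// Sketch: pick the best rendition whose bandwidth fits the measured connection.
// Renditions mirror what a master playlist advertises via #EXT-X-STREAM-INF.
function pickRendition(renditions, measuredBps) {
  const affordable = renditions.filter((r) => r.bandwidth <= measuredBps);
  if (affordable.length === 0) {
    // Nothing fits: fall back to the lowest-bandwidth rendition.
    return renditions.reduce((a, b) => (a.bandwidth < b.bandwidth ? a : b));
  }
  // Otherwise take the highest quality we can afford.
  return affordable.reduce((a, b) => (a.bandwidth > b.bandwidth ? a : b));
}

// Illustrative renditions; the URIs and bitrates are placeholders.
const renditions = [
  { uri: "low/index.m3u8", bandwidth: 800000 },
  { uri: "mid/index.m3u8", bandwidth: 2500000 },
  { uri: "high/index.m3u8", bandwidth: 6000000 },
];

console.log(pickRendition(renditions, 3000000).uri); // "mid/index.m3u8"
```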

Key Benefits of HLS Streaming

HLS streaming is laden with benefits for professional broadcasters and newcomers alike, whether they produce live or on-demand video content.

 

One key benefit of this protocol relates to its compatibility features. Unlike other streaming formats, HLS is compatible with a wide range of devices and firewalls. However, latency (or lag time) tends to be in the 15-30 second range with HLS live streams. 

This is certainly an important factor to keep in mind. Note that Dacast now offers an HLS direct low latency streaming feature, which works with any HLS-compatible encoder.

When it comes to quality, versatility makes HLS video streaming stand out from the pack. On the server-side, content creators often have the option to encode the same live stream at multiple quality settings. 

In turn, viewers can dynamically request the best option available, given their specific bandwidth at any given moment. In other words, from chunk to chunk the data quality can differ to fit different streaming device capabilities.

This is best explained with an example. Let’s say, in one moment, you’re sending a full high-definition video. Moments later, a mobile user encounters a “dead zone” in which their quality of service declines. With HLS streaming, this is not an issue. The player will detect this decline in bandwidth and instead deliver lower-quality movie chunks at this time.

HLS also supports closed captions embedded in the video stream. To learn more about the technical aspects of HLS, we recommend the extensive documentation and best practices provided by Apple.

Comparing HLS Streaming to Other Video Streaming Protocols

Wondering how HTTP live video streaming holds up against other protocols? Let’s check it out.

Several companies have developed a variety of streaming solutions through the use of media streaming protocols. Generally, each of these solutions has represented a new innovation in the field of video streaming. However, similar to the HD-DVD vs. Blu-Ray format wars, or the even older Betamax vs. VHS showdown, industry conflicts can arise.

HLS is currently the best option for streaming media protocols, but it wasn’t always that way—nor will it remain so forever. Let’s review several past and current streaming protocols to better understand the innovations that the HLS streaming protocol offers today.

1. HDS

Known as Adobe’s next-gen streaming, HDS actually stands for HTTP Dynamic Streaming. This protocol was designed specifically for compatibility with Adobe’s Flash video browser plug-in. Therefore, the overall adoption of HDS is relatively small compared to HLS.

Here at Dacast, we use HDS to deliver some of our VOD (Video On Demand) content. For devices and browsers that do support Flash video, HDS can be a robust choice with lower latency. Like HLS, the HDS protocol splits media files into small chunks. HDS also provides advanced encryption and DRM features. Finally, it uses an advanced keyframe method to ensure that chunks align with one another.

2. RTMP

Macromedia developed RTMP (Real-Time Messaging Protocol) in the early 2000s. Designed for streaming both audio and video, RTMP is known to many simply as “Flash.” Adobe later acquired Macromedia and now maintains RTMP as a semi-open standard.

For much of the past decade, RTMP was the default video streaming method on the internet. But with the recent rise of HLS, we’ve seen a decline in the usage of RTMP. Even today, most streaming video hosting services work with RTMP ingestion. (And that includes in-China video hosting, which Dacast now offers.) In other words, broadcasters deliver their streams to their chosen online video platform in RTMP stream format. Then, the OVP usually delivers those streams to viewers via HLS.

In recent years, even this legacy use of RTMP streams has begun to fade. More and more CDNs (Content Delivery Networks) are beginning to deprecate RTMP support.

3. MSS

Next up is streaming protocol MSS (Microsoft Smooth Streaming). As the name implies, it’s Microsoft’s version of a live streaming protocol. Smooth Streaming also uses the adaptive bitrate approach, delivering the best quality available at any given time.

First introduced in 2008, MSS was one of the first adaptive bitrate methods to hit the public realm. In fact, MSS protocol helped to broadcast the 2008 Summer Olympics that year. The most widely-used MSS platform today is the Xbox One. However, MSS is one of the less popular streaming protocols available today. In almost all cases, HLS should be the default method over this lesser-used approach.

4. MPEG-DASH

Lastly, the newest entry in the streaming protocol format wars is MPEG-DASH. DASH stands for Dynamic Adaptive Streaming over HTTP.

MPEG-DASH comes with several advantages. First of all, it is the first international standard streaming protocol based on HTTP. This feature has helped to quicken the process of widespread adoption. For now, MPEG-DASH is a relatively new protocol and isn’t widely used across the streaming industry. However, like the rest of the industry, we expect MPEG-DASH to become the de facto standard for streaming within a couple of years.

One major advantage of MPEG-DASH is that this protocol is “codec agnostic.” Simply put, this means that the video or media files sent via MPEG-DASH can utilize a variety of encoding formats. These encoding formats include widely supported standards like H.264 (as with the HLS streaming protocol), as well as next-gen video formats like HEVC/H.265, VP9, and AV1. And like HLS, MPEG-DASH is an adaptive-bitrate video method.

For a more detailed comparison, you can also review this blog post on MPEG-DASH versus HLS streaming protocols.

5. RTSP

Real-time streaming protocol, or RTSP for short, is a protocol that helps manage and control live stream content rather than actually transmitting the content. It is considered a “presentation layer protocol.”

It is a fairly old protocol, originally developed in the late 1990s through a collaboration between Columbia University, RealNetworks, and Netscape.

RTSP is known for having extremely low latency, which is certainly a plus.

Unfortunately, this protocol comes with a slew of limitations. For starters, when comparing RTSP vs. RTMP, RTSP is less compatible with modern video players and devices. Unlike RTMP, it cannot stream over HTTP in a web browser, nor is it easy to scale.

Advantages of HLS Video Streaming Over Other Protocols

HLS video streaming has a wide range of advantages that make it attractive to broadcasters.

In the first half of this article, we covered a major advantage of HLS over other protocols in terms of streaming video quality. In particular, broadcasters can deliver streams using the adaptive bitrate process supported by HLS. That way, each viewer can receive the best quality stream for their internet connection at any given moment.

This protocol includes a number of other key benefits, including embedded closed captions, synchronized playback of multiple streams, good advertising standards support, DRM support, and more.

The takeaway here for broadcasters? For now, and at least for the near future, HLS is the definitive default standard for live streaming content.

Devices and Browsers that Support HLS

The HLS streaming protocol is also widely supported across multiple devices and browsers. 

Originally limited to iOS devices like iPhones, iPads and the iPod Touch, HLS is now supported by the following devices and browsers:

  • All Google Chrome browsers
  • Safari
  • Microsoft Edge
  • iOS devices
  • Android devices 
  • Linux devices
  • Microsoft devices
  • macOS platforms 

At this point, HLS is a nearly universal protocol.

When to Use HLS Streaming?

HLS video streaming has many uses in the professional broadcasting world.

Currently, we recommend that broadcasters use the HLS streaming protocol in virtually all cases. It is the most up-to-date and widely used protocol for media streaming. In a 2019 Video Streaming Latency Report, for example, 45% of broadcasters reported using HLS streaming.

RTMP came in second with 33% of broadcasters using that alternative. And MPEG-DASH trailed behind even further, used by only 7% of broadcasters.

1. Streaming to Mobile Devices

Developed by Apple, HLS supports all portable devices, including iPhones, iPads, and other streaming media players.

HLS is mandatory for streaming to mobile devices and tablets. Given that mobile devices now make up over half of all internet traffic, HLS is essential for these users.

2. Streaming with an HTML5 Video Player

Remember, native HTML5 video doesn’t support RTMP or HDS. Therefore, if you want to use a purely HTML5 video player, HLS is the only choice.

Along with reaching mobile devices, these considerations point towards HLS as the default standard. If you’re stuck using Flash technology for the moment, RTMP will be a better delivery method—but only if you have no other option.

3. Latency with HLS Streaming

HLS streaming does have one disadvantage, which we mentioned above. Namely, it has a relatively higher latency than some other protocols. This means that HLS streams are not quite as “live” as the term live streaming suggests. 

Generally, with HLS viewers can experience delays of up to 30 seconds (or more, in some cases). That said, this isn’t a problem for most broadcasters. The vast majority of live streams can handle that kind of delay without causing any sort of user dissatisfaction.

One protocol that works well to reduce latency with HLS video streaming is Low-Latency CMAF for DASH. It works with the content delivery network and HTML5 video player to reduce delay where standard HLS falls short.

Building an RTMP to HLS Workflow

So we’ve covered what HLS is, how it works, and when to use it. We’ve also looked at alternative streaming protocols from the past and present. Now, let’s talk through how to build an RTMP Ingest to HLS workflow.

HLS pairs well with RTMP ingest, an older but still widely supported protocol.

If you’re using a streaming service like Dacast, you need to build a workflow that begins as RTMP. This is much simpler than it sounds. Essentially, you simply need to configure your hardware or software encoder to deliver an RTMP stream to the Dacast servers. Most encoders default to RTMP, and quite a few only support that standard.

For Dacast users, our CDN partners then ingest the RTMP stream and automatically rebroadcast it via both HLS and RTMP. From there, viewers default to the best-supported method on their own devices.

Using HLS is relatively straightforward with a professional, full-service OVP. On Dacast, all live streams default to HLS delivery. On computers that support Flash, we do fall back on RTMP/Flash in order to reduce latency. However, HLS is supported automatically on every Dacast live stream and used on almost all devices.

As we discussed above, HLS streaming is delivered through an M3U8 file. This file is a kind of playlist that contains references to the locations of media files. On a local machine, these are file paths. For live streaming over the internet, the M3U8 file contains the URL on which your stream is being delivered.
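As a sketch, a master playlist for a live stream is just text pointing at rendition URLs. The CDN host names and stream paths below are placeholders, not real endpoints:

```javascript
// Sketch: assemble a minimal HLS master playlist as plain text.
// Each rendition becomes a #EXT-X-STREAM-INF line plus its URL.
function buildMasterPlaylist(renditions) {
  const lines = ["#EXTM3U"];
  for (const r of renditions) {
    lines.push(
      `#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`
    );
    lines.push(r.uri);
  }
  return lines.join("\n");
}

const master = buildMasterPlaylist([
  { bandwidth: 800000, resolution: "640x360", uri: "https://cdn.example.com/live/360p.m3u8" },
  { bandwidth: 2500000, resolution: "1280x720", uri: "https://cdn.example.com/live/720p.m3u8" },
]);

console.log(master);
```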

Another relevant process to note is transmuxing. Transmuxing repackages a media stream into a different container format without re-encoding the content itself. This is what allows content ingested via RTMP to be delivered via HLS with no loss in quality.

HTML5 Video Streaming with HLS

The HTML5 video player is essentially a universal, all-device video player.

As mentioned above, the HLS protocol has become the go-to approach for streaming content with HTML5 video players. If you’re not familiar with HTML5 video streaming, it’s one of three main approaches to video streaming today.

With HTML5, the content-hosting website uses standard HTTP to stream the media directly to viewers. Content tags (e.g., the <video> tag) are included as part of the HTML code.

As a result, the <video> tag creates a native HTML5 video player within your browser. That player then fetches the HLS stream over HTTP and handles playback; audio content works the same way via the <audio> tag.
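In practice, a player script first checks whether the browser can play HLS natively; if not, it falls back to a Media Source Extensions library such as hls.js. Here is a minimal sketch of that check. The video element is passed in as a parameter, and the return values are illustrative labels, not part of any real API:

```javascript
// Sketch: decide how to attach an HLS source to an HTML5 <video> element.
// Safari answers "maybe"/"probably" for the Apple HLS MIME type; most other
// browsers answer "" and need an MSE-based library such as hls.js.
function chooseHlsStrategy(videoElement) {
  if (videoElement.canPlayType("application/vnd.apple.mpegurl")) {
    return "native"; // set video.src to the .m3u8 URL directly
  }
  return "mse"; // attach the stream via hls.js (or similar) instead
}
```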

Like HLS, HTML5 is customizable for broadcasters and free for viewers. You can check out our related post on optimizing HTML5 video players with HLS to learn more.

We’ve also written extensively about the transition from Flash-based video (usually delivered via RTMP) to HTML5 video (usually delivered using HLS). Check out our “Flash is Dead” blog post for more on that subject, including why it’s important to use an HTML5 video player.

If you’re streaming over Dacast, you’re already using a fully compatible HTML5 video player. Content delivered via Dacast defaults to HTML5 delivery. However, it will use Flash as a backup method if HTML5 is not supported on a given device or browser. This means that even older devices will have no problem playing your content over your Dacast account.

Of course, some broadcasters may prefer to use a custom video player. Luckily, it’s quite simple to embed your HLS stream within any video player. For example, if you’re using JW Player, simply insert the M3U8 reference URL into the code for your video player. Here’s a visual example:

// Point JW Player at the M3U8 reference URL for your HLS stream.
var playerInstance = jwplayer("myElement");
playerInstance.setup({
  file: "/assets/myVideoStream.m3u8", // the HLS manifest (.m3u8) URL
  image: "/assets/myPoster.jpg"       // poster image shown before playback
});

One more note about using HLS with an HTML5 video player on Dacast: Dacast uses THEOplayer, a universal video player that can be embedded in websites, mobile apps, and nearly any other platform you can think of.

As we’ve mentioned before, compatibility is key when choosing video players and protocols since you want to be able to reach the greatest number of people.

The Future of Live Streaming

Live streaming seems to grow faster by the minute. We can’t wait for future technical improvements in the realm of video delivery, security, privacy, and more.

Before wrapping things up, let’s recap our discussion of some of the advantages of the HLS streaming protocol. First, there is no special infrastructure required to deliver HLS content. In fact, any standard web server or CDN will function well. Additionally, firewalls are much less likely to block content using HLS.

In terms of technical functionality, HLS will play video encoded with the H.264 or HEVC/H.265 codecs. It then chops video into 10-second segments. Remember, latency for delivery tends to be in the 15-30 second range. However, Dacast now has a solution for low-latency HLS live streaming that reduces latency to 10 seconds or less.

The HLS protocol also includes several other built-in features. For example, HLS is an adaptive bitrate streaming protocol. This means that the client device and server dynamically detect the internet speed of the user, and then adjust video quality in response. 

Other beneficial HLS features include support for embedded closed captions, synchronized playback of multiple streams, advertising standards (i.e., VPAID and VAST), DRM, and more.

While HLS is the current gold standard for live streaming, it won’t stay that way indefinitely. We expect MPEG-DASH to become increasingly popular in the coming years. 

As that shift takes place, we’ll see other changes as well, such as the transition away from H.264 encoding to H.265/HEVC. This new compression standard provides much smaller file sizes, making 4K live streaming a real possibility.

However, that time isn’t here yet. For now, it’s important to stick with the established standards to reach as many users as possible on their devices. In other words, HLS is the streaming protocol of the present.

Conclusion

Today, HLS is widely supported, high-quality, and robust. All streamers should be familiar with the protocol, even if you don’t understand all of the technical details. This is true for all kinds of streaming, including live broadcasting over the Dacast live streaming platform.

Our goal in this article was to introduce you to the HLS protocol for streaming media. We’ve discussed what HLS is, how it works, and when to use it, as well as how it compares to other streaming protocols out there. After reading, we hope you have a solid foundation in HLS streaming technology and its future.

You can do your first HLS live stream today with our video streaming solution. If you’re ready to try it today, you can take advantage of our free 30-day trial. No credit card required.

GET STARTED FOR FREE

And for exclusive offers and regular live streaming tips, you’re also invited to join our LinkedIn group.

Finally, do you have further questions, thoughts, or feedback about this article? We’d love to hear from you in the comments below, and we will get back to you. Thanks for reading!

17 thoughts on “What is HLS Streaming and When Should You Use It? [2020 Update]”

  1. Robert Ballantyne says:

    Thanks for this very useful article!

    Our group is interested in creating live broadcasts (actually narrowcasts… we won’t have a large audience, nor could we afford too many streams). The concept of any kind of ‘casting conjures up the vision of old-fashion radio and television broadcasting. In 2017 we expect to be directing our programming to computers and mobile devices — not television sets. We’d like our content to be interactive. In this regard we’d like to include a chat field along with the broadcast.

    So, my concern is your reference to ‘latency.’ You wrote, “Latency for HLS live streams compliant with the specification tends to be in the 15-30 second range.” That means the viewer commenting in the chat field is writing about something that happened many seconds ago. And that explains why the comments in the chat are far out of sync, and therefore out of context, with the broadcast.

    I don’t understand why this has to be the case. Our international group now meets online in video conferences several times each week. We are using Fuze.com (which I think is based on the Vidyo router technology) and Zoom.us (I don’t know how it works). Yesterday we had participants in the USA, Canada, and New Zealand. The video and audio was excellent (near HD). The latency was about the equivalent of 3 or 4 film frames (almost, but not quite, lip-sync).

    While it lasted, we enjoyed using Blab until the developers decided to stop providing that service. Now we are exploring Firetalk.

    Somehow these platforms can provide streaming live programming with just a tiny amount of latency. I would much prefer to use a service such as Dacast to assemble the kind of broadcast we envision (international video interview format + display content from a computer + the possibility of interacting with a chat field) and stream the program from our own website. I am not understanding why this is difficult (or impossible) and expensive.

    • Dacast Team says:

      Thank you reading us Robert!

      Broadcasting live video on internet requires to own a huge network of servers, worldwide. That alone explains the cost of this technology. Data centers cost a lot of money, way more than anyone would imagine.

      As for the 15-30 second delay, I do not understand your point here. The delay is between what the camera is filming and what the viewers are watching. Since the viewers are commenting on what they see, there is no delay on that side. A viewer will see an image and will comment on it instantly. If there is a 20sec delay between the ingest and the viewer, there will be the same delay with the chat.

      Anyway, you can absolutely use a professional video platform such as DaCast for the live video part of your project. You can even associate it to chat clients such as Chatwing. This solution works very well and is use daily by many of our customers.

      • Dacast Customer says:

        Robert is talking about people interacting via chat to the broadcast.
        Have you ever seen those old live TV programs where people watching are calling by phone and are live with the presenter? Typically quiz shows.
        Or live contests broadcasted by the TV, and the Presenter “opens” and “closes” the voting windows on a website or a phone center.

        In our case, a broadcast of a conference, with a Q&A session: people in the auditorium is asking questions raising their hands and receiving a mike, people watching in live streaming are sending questions via chat.
        30 seconds delay is disruptive: the speaker is receiving questions about items discussed 30 seconds ago, and the answer arrives another 30 seconds later…
        Moreover, when you ask:”Any other questions?” you have to have the entire auditiorium waiting silently for 30 seconds to be sure even people watching have no more questions….. imagine 200 people silent in an auditorium….

        This is a real issue. We gave up with Ustream, even though they were free, because their delay increased with the time, reaching quite two minutes after a 5 hour conference…. Dacast HTML5 is currently remaining under 30 secs, but I wouldn’t recommend it for critical conferences. Old Dacast Flash rtmp was below 10 seconds…

        • Dacast Team says:

          You answered the problem in the way you formulated the question. RTMP streaming simply isn’t a solution for multiple-way streaming as the conversion from RTMP to HLS is taking 15-20 seconds already. Maybe using a conferencing software or a conferencing platform would be smarter as they are developed for this purpose, on a totally different type of technology.
          On24.com is a very robust choice, being used by major companies. Just that the $7K / month isn’t affordable by everyone 🙂

          • Dacast Customer says:

            I beg to disagree.
            We are not looking for an interactive webinar, but to a broadcasting with a small portion with a return channel. Even in the auditorium, from the time someone raises his/her hand and the time an assistant with the mike reaches him/her, it takes 4/5 seconds. Since the return channel (the chat) is with no delay, a 4/5 seconds delay is “live”.
            In non-critical conferences (such as, free-of-charge or among internal members ones), even 10/15 seconds is acceptable. An RTMP only streaming (such as the old dacast flash one) was “live enough”.
            The introduction of HLS for the broadcasting leg, as you said, is adding those 15-20 distruptive seconds. We are still using Dacast for our internal free-of-charge conferences with members, because its a value for money solution.
            I had a look to On24.com but is well more than we need. More towards Adobe Connect or Cisco WebEx, than a Conference with a chat. We tested livestream.com and it remained within the 5 seconds delay. Unfortunately, to have control on embedding location, so to restrict access only to members, you need a $799/month contract: 1/10 of On24.com, but 15 times a 2TB Dacast event pricing.

          • Dacast Team says:

            You got the idea! On24 quote me a $7K plan last time I reached their sales number. Livestream is based on the exact same technology as we are, but are charging a lot more because of their branding and expenses.

            One solution you should experience is to go with FLASH instead of HTML5. The delays are much shorter (few seconds only) so it would be a good fit if you are not trying to reach mobile users (as FLASH isn’t working on all smartphones and tablets).

            The way DaCast has been developed, it is unfortunately not possible to get a return channel.

          • Dacast Team says:

            Or, as I was saying on another thread, it is always possible to handle the the inputs and outputs before going through the video encoder. You could for instance realize a Skype interview, and then use the windows as a input source in your encoder. Than way, your viewer would be able to view video coming from multiple sources.

  2. Graham Lowe says:

    Thanks for the information. We are using DaCast for our live stream using Adobe FMLE and after reading this article I said I better plan an upgrade to get away from flash. I spent considerable amount of time and found a device that would fit into our environment nicely and it streams RTMP and HLS. Great, so I added this device to our budget and plans however I just went to see how easy it would be to configure HLS to use with DaCast live stream and was a bit shocked to see it’s not supported by DaCast, just RTMP. Now I am a bit confused about this article unless DaCast is rolling out HLS support soon for live streaming?

    • Dacast Team says:

      When live streaming as “HTML5” (in opposition to Flash), we ingest an RTMP signal and convert it to HLS for the delivery already! That alone explain the 20 seconds delay between the broadcast and what is shown in the player. This is mandatory for anyone willing to broadcast to smartphones and tablets.
      In short: HLS is here at DaCast for years, but happens via a RTMP ingest (HLS encoders being way too overpriced for the moment. We are still discussing an alternative here, to ingest HLS and therefore reduce a lot the delay)

  3. Gordon Barker says:

    Many thanks very informative and really appreciate your article can we get more technical around HLS streaming such as what streaming Servers can be used for streaming and what client software can be used?

  4. John Parulis says:

    Dacast excels in the field of live stream services with great tech articles like this. Good to have an understanding of HLS….though I wish the latency could be improved a bit. Thanks guys! My clients are happy with our streaming work, thanks in part to you.
