How to Power your Live Streaming Projects at Scale using Akamai and Linode

Over the years, Akamai has deployed the most pervasive, highly distributed edge network, with approximately 350,000 servers in more than 135 countries and over 1,400 networks around the world. This distributed platform is the foundation for many Akamai solutions, including content delivery for video streaming, web and app performance, edge computing, and a broad set of security solutions.

In February 2022, Akamai announced an agreement to acquire Linode, one of the easiest-to-use and most trusted infrastructure-as-a-service (IaaS) platform providers, with the goal of becoming the world's most distributed compute platform, from cloud to edge. Akamai and Linode recently communicated a plan to add more than a dozen new Linode data centers, equipped with Linode's full product suite, across North America, APAC, LATAM, and Europe by the end of 2023.

Core, Distributed, and Edge Locations

In addition, 2023 will see Akamai and Linode introduce a new concept designed to bring basic computing capabilities to difficult-to-reach locations currently underserved by traditional cloud providers. These are called "Distributed Sites", and Akamai and Linode have identified more than 50 cities where it makes sense to place them.

After this short introduction, let's talk about what you will read here.

If you know me even a little, you are aware that my passion has been, and still is, video streaming. Over the years I have used many open-source and commercial tools to build streaming solutions for many clients, mainly for sports events and mainly for live use cases. So I thought: why not try to build some live streaming workflows using Akamai and Linode together? These POCs target live streaming use cases and should provide some ideas and options, like:

  • PUSH or PULL workflows: push the live streams from the Linode cloud to the Akamai CDN (to the Akamai live origin, Media Services Live 4), or have the Akamai CDN pull the streams from the Linode cloud (in this case, pay attention to the origin egress traffic, which can be high; we will see how to limit it)
  • Different video ingest protocols (SRT, RTMP, MPEG-TS, ...): yes, let's include the legacy RTMP protocol, as it is still used for many workflows. In this case, we can still use the legacy encoders some companies still have
  • Open-source (free) and commercial (inexpensive) encoding and packaging software and tools
  • The option to transcode and package, or just package

This diagram represents four possible workflows I built to power live streaming projects at scale using Akamai and Linode. Each workflow has some pros and some cons: the goal of my exercise is to experiment with a broad set of options for combining edge and cloud infrastructure into a powerful architecture. Also note that I have not tested the following workflows in production environments, but most of them are very popular solutions used by many customers around the globe. Moreover, an active-active workflow (that is, with a backup) using geo-redundant Linode instances is possible; here and here you can find some additional information and ideas about how to achieve it.

For all the tests I used a Linode Nanode instance hosted in Frankfurt, Germany, running the Ubuntu 20.04 LTS operating system. Nanode instances are VMs with 1 shared CPU, 1 GB of RAM, and 25 GB of storage (as of October 2022): they are the most affordable instance type in Linode. For production usage I suggest choosing a Dedicated CPU instance with the right amount of CPU; these Linode types are good for full-duty workloads where consistent performance is important, like video streaming, especially when live transcoding is involved.

As the encoder I mostly used OBS (https://obsproject.com/), a free and open-source software package for video recording and live streaming, installed on my Mac and capable of streaming in SRT and RTMP. I am located in Turin, Italy. Additionally, during the tests I used a Datadog Agent to monitor the health of the VMs, including CPU, memory, and network in/out stats.

Finally, in this article I will often use terms that refer to some popular streaming protocols; if you would like to explore them further, see the links below:


Workflow 1: NGINX Open Source with RTMP Module

NGINX is a very popular free, open-source, high-performance HTTP server and reverse proxy. I also used the NGINX RTMP module, a simple additional package that enables the web server to receive RTMP streams from your encoder (in my case OBS) and package them (without transcoding) into HLS videos that can be played by modern browsers and devices.
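
As a reference, here is a minimal nginx.conf sketch for this kind of setup. It assumes the RTMP module is installed; the application name (live), the HLS path (/var/www/hls), and the fragment/playlist durations are illustrative choices, not values from my tests.

worker_processes auto;
events {}

rtmp {
    server {
        listen 1935;                      # default RTMP port
        application live {
            live on;                      # accept live streams from the encoder
            hls on;                       # package incoming RTMP into HLS
            hls_path /var/www/hls;        # where playlists and segments are written
            hls_fragment 2s;
            hls_playlist_length 20s;
        }
    }
}

http {
    server {
        listen 80;
        location /hls {
            # serve the playlists and segments that the Akamai CDN will pull
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /var/www;
            add_header Cache-Control no-cache;
        }
    }
}

With a configuration like this, OBS would stream to rtmp://<linode-ip>/live/<stream-key>, and the playlist would be reachable at http://<linode-ip>/hls/<stream-key>.m3u8.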

For this workflow, the Akamai CDN is set up to pull the HLS content from the Linode NGINX origin over HTTP and serve it over HTTPS to the clients (the AMD Protocol Downgrade feature). Since the Akamai CDN pulls the streams from Linode, if you have a lot of users you may see elevated origin egress traffic. To reduce and better control this traffic, you can optionally use Cloud Wrapper: a highly efficient custom caching layer that wraps around centralized cloud providers, shielding them to improve origin offload.

I used the HLS.js player (https://hls-js.netlify.app/demo) and the native iOS player to play the stream; playback was smooth, without interruptions. Finally, the load on the Linode instance was really low: the CPU never went above 3% with one RTMP stream in and two HLS playback sessions. Very good!

I also found this tutorial on YouTube about how to install the build tools and dependencies, configure HLS or DASH, and set the right playback method.


Workflow 2: FFMPEG

FFMPEG is a very popular free, open-source, and complete solution to record, convert, and stream audio and video. There is hardly a video developer who has never used FFMPEG, and FFMPEG also powers many commercial solutions.

FFMPEG supports many input protocols (I used SRT and MPEG-TS over UDP for my tests) and can be used for packaging only (SRT to HLS, for example) or for transcoding and packaging at the same time. I used FFMPEG to push content to Akamai Live Origin Media Services Live 4 (MSL4); it can push HLS, DASH, or CMAF streams to MSL4. Check the compatibility of your encoder here.

I tested the following two workflows:

  • An encoder with FFMPEG on a "first" Linode instance, encoding and streaming an MPEG-TS feed via UDP to a second Linode instance in the same VLAN as the first one (https://www.linode.com/docs/products/networking/vlans/). The "second" Linode used FFMPEG to read the MPEG-TS feed, transcode it into two resolutions (640x360 and 480x270), and package it as HLS, pushing it to Akamai MSL4. Then a classic Akamai CDN setup was implemented, using MSL4 as the origin.
  • The same workflow as before, but instead of MPEG-TS I used an SRT stream from OBS installed on my Mac, pushed to the same "second" Linode instance in Frankfurt (see the SRT input sketch below)
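
For the SRT variant, only the input side of the FFMPEG command changes. Here is a minimal sketch of a listener-mode SRT input, assuming an FFMPEG build with libsrt; the port (9999) and the simplified output options are illustrative, and in practice you would keep the full transcoding/MSL4 options shown later in this article.

# check that your ffmpeg build includes SRT support
ffmpeg -protocols 2>/dev/null | grep srt

# wait for an incoming SRT stream from OBS and repackage it to HLS
ffmpeg -i 'srt://0.0.0.0:9999?mode=listener' \
  -c:v libx264 -b:v 2M -c:a aac -b:a 128k \
  -f hls -hls_time 2 -hls_list_size 20 /var/www/hls/live.m3u8

In OBS, the server URL would then be srt://<linode-ip>:9999.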

To create the FFMPEG command, I used the FFMPEG command line builder (https://moctodemo.akamaized.net/tools/ffbuilder/), developed by Peter Chave. Here below you can find:

  • the command used on the "first" encoder, producing an MPEG-TS stream via UDP from a local video file. Instead of x.x.x.x:1234, put your destination server IP address and port; in my case I put the "second" Linode instance's IP

ffmpeg -re -i big_buck_bunny_720p_h264.mov -s 640x360 -map 0:v -c:v libx264 -b:v 2M -map 0:a -c:a aac -f mpegts 'udp://x.x.x.x:1234?pkt_size=1316'

  • the command I used for the MPEG-TS input, listening on localhost and producing two transcoded feeds pushed to Akamai MSL4. At the end of the command, replace the XXXXXXX placeholders (present twice in each of the two URLs) with your entry point and stream ID.

#!/bin/bash
timestamp="$(date +%s)"

# read the local MPEG-TS feed, transcode it into two renditions,
# and push the resulting HLS segments and playlists to Akamai MSL4
ffmpeg \
  -err_detect ignore_err \
  -re -i 'udp://127.0.0.1:1234?fifo_size=1000000&overrun_nonfatal=1' \
  -flags +global_header -r 24 \
  -filter_complex "split=2[s0][s1];[s0]scale=640x360[s0];[s1]scale=480x270[s1]" \
  -pix_fmt yuv420p \
  -c:v libx264 \
  -b:v:0 730K -maxrate:v:0 730K -bufsize:v:0 730K/2 \
  -b:v:1 300K -maxrate:v:1 300K -bufsize:v:1 300K/2 \
  -g:v 24 -keyint_min:v 24 -sc_threshold:v 0 \
  -color_primaries bt709 -color_trc bt709 -colorspace bt709 \
  -c:a aac -ar 48000 -b:a 128k \
  -map [s0] -map [s1] \
  -map 0:a:0 -map 0:a:0 \
  -preset veryfast \
  -tune zerolatency \
  -hls_init_time 2.000 \
  -hls_time 2.000 \
  -hls_list_size 20 \
  -hls_flags delete_segments \
  -hls_base_url $timestamp/ \
  -var_stream_map 'v:0,a:0 v:1,a:1' \
  -hls_segment_filename 'https://p-epXXXXXXX.i.akamaientrypoint.net/XXXXXXX/test/'$timestamp/stream%v_%05d.ts \
  -master_pl_name master.m3u8 \
  -http_user_agent Akamai_Broadcaster_v1.0 \
  -http_persistent 1 \
  -f hls \
  https://p-epXXXXXXX.i.akamaientrypoint.net/XXXXXXX/test/level_%v.m3u8


The "second" Linode instance of this test has been of course more loaded if compared to the Instance of the first NGINX test since we were transcoding 2x (low resolution) feeds.


Workflow 3: Ant Media

Ant Media is a commercial solution whose streaming engine software provides ultra-low-latency streaming using WebRTC, as well as transcoding and packaging to HLS. Ant Media is available in the Linode Marketplace: deploying it is very easy and super fast.

As in the NGINX workflow, I set up the Akamai CDN to pull content from the Linode instance where Ant Media was installed. This time I installed an HTTPS (Let's Encrypt) certificate on the Ant Media machine, using this simple procedure. The full workflow was in HTTPS.

From my OBS encoder, I pushed an SRT stream (I also tried RTMP successfully) to the Linode Ant Media server. I then played back the transmuxed HLS stream from HLS.js and an iPhone through the Akamai CDN, and I also played back a low-latency WebRTC stream directly from Linode.
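
For reference, the ingest URLs I pointed OBS at looked roughly like the following. I am assuming Ant Media's default SRT port (4200), its streamid convention, and the default LiveApp application here; check the settings of your own deployment.

# SRT ingest (OBS "Server" field); the application and stream ID go in the streamid parameter
srt://<linode-ip>:4200?streamid=LiveApp/myStream

# RTMP alternative
rtmp://<linode-ip>/LiveApp/myStream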

As you can imagine looking at the CPU usage, I was initially transcoding the input to 854x480 and pushing both the input (1280x720) and the transcoded feed to MSL4 (I'm not sure this setup would work well in a production environment, and switching bitrates can be difficult given that the feeds come from two different encoding systems).

Then, at 00:20, I stopped the transcoding and the CPU went from 40% to 15%, while I was still pushing the original input feed to Akamai MSL4. At 00:24 I started playing the WebRTC feed. Compared to the first two workflows, memory usage was quite close to 1 GB, and you will also notice that Ant Media's Java process accounted for almost 40% of it.


Workflow 4: Wowza Streaming Engine

Wowza Streaming Engine is a popular commercial streaming server software for video streaming. Like FFMPEG, Wowza Streaming Engine is compatible with MSL4, and it can push HLS and DASH live video streams to Akamai.

You can find a simple guide to making Wowza work with Akamai here.

I used my Mac to stream SRT and RTMP live feeds to Wowza Streaming Engine, which was transcoding the 1280x720 input to 640x360. Then I used the Stream Target feature to push both streams to Akamai MSL4, where the Akamai CDN (AMD) was set up to pull the content. As with the Ant Media workflow, I'm not sure this setup would work well in a production environment, because the feeds come from two different encoding systems; it's better to transcode all the feeds you push to Akamai with the same system (to get a multi-bitrate HLS and/or DASH stream).

The Linode instance was very busy this time: at 15:44 I enabled the transcoding and the CPU quickly went to 75%. The memory was completely full, with Wowza's Java process taking most of it. For this workflow, a 1-CPU Nanode instance was clearly not enough.


Bonus Workflow: Shaka Packager

Shaka Packager is an open-source tool and media packaging SDK for DASH and HLS packaging and encryption. It can transmux input media files from one container to another. I installed Shaka Packager on a Docker Linode instance using the Linode Marketplace (Docker image). With this simple tutorial, it was very easy to pull the latest Shaka Packager image and run the container.

If you want to use the Shaka Packager on a Web Server, don't forget to:

  • Install Apache
  • Enable CORS
  • Install a valid HTTPS certificate

For the last step, installing an HTTPS certificate, I found it very simple and fast to use Certbot on Linode: Certbot is a tool that automates the process of getting a signed certificate via Let's Encrypt to use with TLS. For most operating system and web server configurations, Certbot creates signed certificates, configures the web server to accept secure connections, and can automatically renew the certificates it has created. In most cases, Certbot can seamlessly enable HTTPS without causing server downtime.
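
On Ubuntu with Apache, the Certbot flow boils down to a couple of commands; this is a sketch assuming Ubuntu 20.04 and the Apache plugin.

# install Certbot and its Apache plugin
sudo apt update
sudo apt install certbot python3-certbot-apache

# obtain a certificate and let Certbot configure Apache for HTTPS
sudo certbot --apache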

Finally, I used the following commands to transmux an MP4 VOD file into a DASH stream. Simple and effective!

docker run -v /var/www/html/:/var/www/html/ -it --rm google/shaka-packager        

and then:

packager input=/var/www/html/movie.mp4,stream=video,output=/var/www/html/video.mp4 --mpd_output /var/www/html/example.mpd
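
Note that this command packages only the video track. To include the audio track in the manifest as well, you would add a second input descriptor along these lines (the output file names are illustrative):

packager \
  input=/var/www/html/movie.mp4,stream=video,output=/var/www/html/video.mp4 \
  input=/var/www/html/movie.mp4,stream=audio,output=/var/www/html/audio.mp4 \
  --mpd_output /var/www/html/example.mpd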

Conclusions

It is an exciting time to work with Akamai and Linode on strong solutions to power and protect life online! In this article, I explored four (plus one) different ideas and ways to build streaming workflows using the #Linode #Cloud and the #Akamai #Edge architecture.

Now, with Linode, Akamai is expanding from delivering and securing applications to empowering developers to build on Akamai. This is the next major step in Akamai's evolution: marrying Linode's experience in cloud computing with Akamai's leadership in scale and security to create the world's most distributed compute platform, making it easier for developers and businesses to build, run, and secure their applications.
