COVID-19 and Beyond: Opportunities for Video Streaming

I grew up off-grid in a cabin in the New Mexico mountains. That was isolation. By contrast, isolation in the time of coronavirus is incredibly connected. As working, socialising and relaxing from home reshape that connectivity, new patterns are emerging, along with opportunities for the future.



Akamai, a leading content delivery network (CDN), saw global internet traffic increase by 30% in March. That’s an entire year’s growth in a few weeks, and without live sports streaming. Similarly, Comcast saw a 32% increase in peak USA traffic over March, with plateaus in early lockdown markets.

Even before COVID-19, Sandvine’s 2019 global internet usage report showed video was 60% of downstream internet traffic. When Conviva analysed three weeks in mid-March, it found that video streaming viewing hours jumped more than 20% globally in the final week, and 27% in the USA. By the end of March, Comcast saw a 38% increase in streaming and web video consumption in the USA.


Although Internet service providers (ISPs) and CDNs are engineered to deal with peak changes, when usage spiked, the European Commissioner asked streamers to switch to Standard Definition (SD) when High Definition (HD) wasn’t necessary. The European Broadcasting Union (EBU) followed by issuing recommendations for adapting streaming quality during times of crisis.

Assuaging one concern, Conviva also found that daytime viewing jumped nearly 40% in that final week, spreading the peak load throughout the day. But that still leaves the sheer volume of traffic – the number of bits – flowing across the internet.


Netflix and Google’s YouTube agreed to reduce bitrates in Europe for 30 days, with Netflix dropping by 25% and YouTube moving to SD as a default globally. Both were crucial, because while Netflix usually has the largest percentage of video traffic, YouTube is currently generating almost twice the traffic of Netflix. Amazon Prime Video, Apple TV+ and Walt Disney’s Disney+ soon followed.

Consumers were concerned: they were paying for HD but would get SD. Netflix explained that customers would still get the SD, HD and Ultra-High Definition (UHD) resolutions they paid for, just no longer the highest-quality encodes from an adaptive bitrate (ABR) “bitrate ladder” of low-to-high bitrates and resolutions. Normally, the player selects the most suitable rung for the user’s current bandwidth, device and purchased resolution.
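For illustration, the rung-selection logic can be sketched in a few lines of Python. The rungs and bitrates below are hypothetical, not any service’s actual ladder:

```python
# Hypothetical ABR ladder: (bitrate_kbps, label) rungs, low to high.
LADDER = [
    (235, "240p SD"),
    (750, "480p SD"),
    (3000, "720p HD"),
    (5800, "1080p HD"),
    (15000, "2160p UHD"),
]

def select_rung(bandwidth_kbps, max_tier=len(LADDER)):
    """Pick the highest rung whose bitrate fits the measured bandwidth,
    capped at the tier the subscriber has paid for."""
    eligible = LADDER[:max_tier]
    chosen = eligible[0]  # always fall back to the lowest rung
    for rung in eligible:
        if rung[0] <= bandwidth_kbps:
            chosen = rung
    return chosen

# A 25% bitrate cut keeps every resolution tier available but trims each rung:
capped = [(int(b * 0.75), label) for b, label in LADDER]
```

Note that capping bitrate per rung, as in `capped`, leaves every purchased resolution selectable – which is exactly how a reduction can coexist with the resolution tiers customers paid for.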



Netflix’s total energy consumption for 2019 – 451,000 megawatt-hours – is enough to power 40,000 average American homes for a year, and is an 84% increase over 2018, compared to 20% user growth. That includes their offices, CDN and partnerships such as Amazon Web Services, Google Cloud and the caching servers they put into ISPs.

Netflix has 167 million subscribers. Disney+ quickly hit 50 million, with one analyst predicting 226 million by 2024. Reducing bits creates a more sustainable energy-consumption-to-user-growth ratio and helps companies meet their environmental-impact objectives.


During the 30 days of COVID-19-inspired bitrate reduction, streamers will have saved money per view by reducing storage, distribution and energy-consumption costs.

If one million people watch one hour per day, at say 1 GB of data per hour (roughly between SD and 720p HD), and it costs $0.0025 to stream that 1 GB to one person, that’s nearly $1 million per year ($912,500). At 100 million daily viewing hours, that’s $91 million per year. YouTube’s viewers watch 1 billion hours per day – nearly $1 billion per year. A 25% saving on that is $228 million. Those costs increase with Full HD, UHD, more subscribers, binge-watching, and for those not lucky enough to own a CDN or have volume deals.
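The arithmetic above is simple enough to check in a few lines, using the same illustrative figures of 1 GB per viewing hour and $0.0025 per GB delivered:

```python
GB_PER_HOUR = 1.0          # rough SD-to-720p figure used in the text
COST_PER_GB_USD = 0.0025   # illustrative per-GB delivery cost from the text

def annual_delivery_cost(daily_viewing_hours):
    """Yearly delivery cost in USD for a given number of daily viewing hours."""
    return daily_viewing_hours * GB_PER_HOUR * COST_PER_GB_USD * 365

print(annual_delivery_cost(1_000_000))              # 912500.0 – ~$1m/year
print(annual_delivery_cost(1_000_000_000))          # 912500000.0 – ~$1bn/year
print(annual_delivery_cost(1_000_000_000) * 0.25)   # 228125000.0 – the 25% saving
```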


While internet usage may plateau during lockdown, it will still increase long-term. Distributors could get used to the higher margins. In addition to YouTube, Google is temporarily lowering its Nest camera default from high to medium, saving 100 GB of data per month. These short-term actions enable quick bitrate reduction, but they don’t preserve quality. Consumers may tolerate that for 30 days, but they won’t indefinitely. Luckily there’s been a lot of research into this area.



Streaming content is either captured and delivered live or produced and later distributed on-demand. A codec encodes (usually in hardware) the moving image source and decodes (usually in software) on a device to display it.

Codecs for streaming are lossy: they reduce the bitrate as much as possible while attempting to maintain fidelity to the original source. Examples include MPEG-4 AVC (H.264), HEVC (H.265), VVC, Google’s VP9 and AOMedia’s AV1. AVC, the oldest, is the only one supported on virtually all devices, but it requires the highest bitrate. Newer codecs are more efficient at reducing bitrate while maintaining fidelity, but typically require more time and power to encode. Licensing costs vary.


There are a variety of bitrate reduction tactics. Some optimise during encoding. Per-title encoding, pioneered by Netflix in 2015, tailors the encoded bitrate ladder per title, using fewer bits for simple content and more for complex action. To measure fidelity, Netflix used a quality metric known as PSNR (Peak Signal-to-Noise Ratio). Unfortunately, PSNR doesn’t always measure perceptual quality – i.e. how it looks to a person. Neither does SSIM (Structural Similarity), which was designed to improve on PSNR.
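A minimal sketch of the per-title idea, with made-up probe numbers (this is not Netflix’s actual method): probe-encode each title at a few bitrates, then keep only the rungs that add meaningful quality over the previous one.

```python
def build_ladder(probe_results, min_gain=1.0):
    """probe_results: (bitrate_kbps, quality_score) pairs, sorted by bitrate.
    Keep a rung only if it improves quality by at least min_gain points."""
    ladder = [probe_results[0]]
    for bitrate, quality in probe_results[1:]:
        if quality - ladder[-1][1] >= min_gain:
            ladder.append((bitrate, quality))
    return ladder

# A simple cartoon plateaus early, so fewer rungs (and bits) suffice;
# complex action keeps gaining quality from every extra megabit.
cartoon = [(500, 88), (1000, 93), (2000, 93.5), (4000, 93.8)]
action  = [(500, 70), (1000, 80), (2000, 88), (4000, 94)]
print(build_ladder(cartoon))  # [(500, 88), (1000, 93)]
print(build_ladder(action))   # all four rungs kept
```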

So, Netflix co-created VMAF (Video Multi-Method Assessment Fusion), a perceptual quality metric. These metrics and others, including commercial ones like SSIMPLUS, are used by distributors to guarantee quality. For a sense of scale: a PSNR of 45 dB (decibels) is very good quality while 35 dB shows visible artefacts, and a VMAF of 93 (on a 0–100 scale) or an SSIM score of 0.95 (on a 0–1 scale) means the encode is nearly indistinguishable from the source.
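PSNR itself is straightforward to compute; a minimal NumPy version follows, with synthetic noise chosen purely to demonstrate the dB scale:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * np.log10((max_value ** 2) / mse)

# Synthetic example: a random frame plus mild uniform noise.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-3, 4, size=(64, 64)), 0, 255)
print(round(psnr(frame, noisy), 1))  # a PSNR in the low 40s dB: very good quality
```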


Machine learning (ML) can save encoding complexity and bitrate by training deep neural networks to perceptually improve each frame – enhancing the areas most important to the viewer and blurring the areas that aren’t. This essentially reverse-engineers perceptual metrics to make encoding as effective as possible. When this happens before encoding – precoding – it works with any codec, encoder and decoder. A side effect is that this also exposes the weaknesses and limitations of the metrics themselves and encourages their improvement.
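As a toy illustration of the precoding idea (not iSIZE’s method, and using a box blur in place of a trained network), a precoder can keep detail where a saliency mask says the viewer will look and smooth everything else, before any codec sees the frame:

```python
import numpy as np

def precode(frame, saliency, block=4):
    """Toy precoder: keep salient pixels sharp and box-blur the rest,
    so a downstream encoder spends fewer bits on unimportant regions.
    frame: 2-D float array; saliency: 0-1 mask of the same shape."""
    h, w = frame.shape
    # Cheap box blur: average block x block tiles, then upsample back.
    coarse = frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    blurred = coarse.repeat(block, axis=0).repeat(block, axis=1)
    return saliency * frame + (1 - saliency) * blurred

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
saliency = np.zeros((64, 64))
saliency[16:48, 16:48] = 1.0  # pretend the viewer watches the centre
out = precode(frame, saliency)  # centre unchanged, borders smoothed
```

A real precoder learns the mask and the enhancement jointly; the point here is only that the transformation happens frame-by-frame before encoding, which is why it is codec-agnostic.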

There are balances to maintain. Focus purely on hitting 100 for VMAF and the content will look artificial. Metrics like PSNR and SSIM help maintain a natural feel, but when tuning for human perception those scores may drop or stay flat while VMAF tends to rise. Likewise, optimising for metrics alone makes the image sharper but harder to encode, saving no bitrate; optimising only for bitrate and encoding ease makes it blurry, and fidelity is lost.


I advise iSIZE, whose machine learning precoder claims 20%–40% bitrate savings (up to 60%) without changing the resolution, with VMAF typically staying the same or increasing. Latency is one frame, but higher frame rates and resolutions, such as UHD, require higher-performance GPUs and more RAM to run in real time.

I asked expert reviewer Jan Ozer to independently test iSIZE’s BitSave product. He tested using the MPEG AVC (H.264) codec.

Jan confirmed that “BitSave is a legitimate processing technology and not a [VMAF] hack”. Against a straight baseline encode, BitSave showed significant improvement in VMAF, with PSNR down, SSIM about even, and frames looked like they had “improved contrast with a touch less haze” with a “separate round of color grading”.

Jan then added the VMAF hacks to the FFmpeg filters for a comparison test. While BitSave increased contrast in most files, which raises VMAF, simply increasing contrast in FFmpeg darkened the videos; iSIZE’s IBC paper confirms that it modifies luminance (brightness values), not chrominance (colour). Jan then compared the baseline, BitSave and the hacked FFmpeg filters. At the same data rate, BitSave delivered the highest VMAF and better PSNR and SSIM than FFmpeg. At a 40% bitrate reduction, the FFmpeg hack won on VMAF, but BitSave had higher PSNR and SSIM – although the baseline scored best on both.

Ultimately, “[a]fter many hours of testing, [Jan] found that BitSave’s technology is valid and valuable” though he recommends subjective testing. I agree and recommend testing at various bitrate savings and metric balances. iSIZE continually optimises its BitSave product, and tests with studios and other large industry players. It has very good potential to provide long-term sustainability and cost-cutting while enhancing the perceived customer experience.

This article was first published on LinkedIn.
