Slack Video Integration for Live Notifications


Using Slack for its collaborative capabilities? Looking for ways to bolster that effort through automatic notifications of your streaming projects? Ustream is introducing a new Slack video integration for Ustream video channels. The integration allows broadcasters to link to a Slack channel to push automatic notifications, and works with both Ustream Align for internal communication and Pro Broadcasting. It creates an easy way for team members and followers to stay informed about the latest live stream or video on demand content.

What is Slack?

Slack functions as a powerful and engaging tool to facilitate communication, largely for internal use cases. In 2014, a year after the “app” launched, engagement was tracked at an average of 10 hours per user. The service helps keep people connected by being available on desktops and mobile devices, with apps for Android, iOS, and even a beta version for Windows phones.

Slack Notification Use Cases

The Slack integration feature for Ustream can be enabled on “public” channels on the messaging platform. This feature is one of the few Connection options that works with Ustream Align. The use cases for this implementation are numerous. For example, a general company channel can be linked to notify the team of important streams like a CEO town hall meeting. Notifications can also inform channel members when training sessions begin or when training resources are updated.

For event use, a marketing team can be quickly notified when a broadcast for an event is live. The notification can be used as a sync point, cueing others to begin increased social media efforts. It can also supplement webinars, notifying a team to queue up internal resources if the stream is expected to bring in live inquiries.

For public use, many open communities exist that host relevant conversation around a topic. For example, looking to live stream a construction site for a new skyscraper? Chances are a Slack community exists around that interest and can be easily notified and engaged whenever the broadcast goes live. Although Slack is generally associated with internal use, public-facing channels do thrive with passionate participants. A number of directories are available to catalog the many public channels.

Setting Up Slack Integration

Linking a Ustream channel with a Slack channel is a quick and easy process.

While logged into a team on Slack, a broadcaster needs to log in to their Ustream account and go to the Connections tab under Account. One of the connection options on this tab will be for Slack. Clicking the Connect button will redirect to an authorization page, where clicking Authorize redirects back to the Connections tab.

The Slack team account has now been integrated with the Ustream account. The integration can be disabled at any time by clicking the Disconnect button.

Check our Slack Integration Set Up Guide for a more detailed explanation of enabling Slack integration, with images for each step of the process.

Enabling Slack Notifications

After setup, a new tab called Slack Notifications will be added to Ustream channels on the account. Selecting this will allow a broadcaster to enable the feature and designate which Slack channel the notifications will be sent to. Note that the dropdown for Slack channels will include all Slack channels connected to the team, regardless of whether the individual who set up the integration is a part of that channel. “Private” channels are not available on the list.

Once enabled, the Slack integration will push notifications to participants in the Slack channel whenever the Ustream channel goes live or new video content is added. In either of these events, a message will be pushed to the corresponding Slack channel. For a stream going live, this message will appear like the following:

Ustream BOT
Live now

[Channel title]
[Channel description]

Both of the elements in brackets are controllable by the broadcaster. The “Channel Title” can be set in the Info tab. The “Channel Description” can be set as part of the Channel Page tab, under the About settings. The description will truncate if it is longer than 140 characters, ending in an ellipsis (…).

The channel title will act as a link, leading back to the Ustream channel page. If the Ustream channel page is disabled, the link will be removed, though the title will still appear.
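Ustream pushes these messages automatically once the integration is linked, but the shape of the notification is easy to picture with Slack's generic incoming-webhooks API. The sketch below is illustrative only: the channel URL and helper names are hypothetical, and the 140-character truncation mirrors the behavior described above.

```python
import json

def truncate(description, limit=140):
    # Mirror the described behavior: descriptions longer than 140
    # characters are cut and end in an ellipsis.
    if len(description) <= limit:
        return description
    return description[:limit].rstrip() + "…"

def live_now_payload(title, description, channel_url):
    # Slack incoming webhooks accept a JSON body with a "text" field;
    # Slack's <url|label> syntax renders the channel title as a link.
    return {
        "username": "Ustream BOT",  # display name, as in the message above
        "text": "Live now\n<{url}|{title}>\n{desc}".format(
            url=channel_url, title=title, desc=truncate(description)
        ),
    }

payload = live_now_payload(
    "Quarterly Town Hall",
    "CEO update and Q&A session for all staff members. " * 5,  # long on purpose
    "https://www.ustream.tv/channel/example",  # hypothetical URL
)
body = json.dumps(payload)  # POSTing this to a webhook URL would send it
```

Sending it is a single HTTP POST of `body` to an incoming-webhook URL; the actual Ustream integration manages all of this for you.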

For video content added to a Ustream channel, a similar message will appear on the Slack channel to notify users.

Ustream BOT
New video on [Channel Title]

[Video title]
[Video description]

The bracketed information can be customized by the Ustream broadcaster by going to the Videos tab for the channel and editing the video. The edit panel allows the video title and description to be changed.

Only one Slack channel can be linked to a Ustream channel at a time. However, a broadcaster can change the linked Slack channel in the Connections tab at any time.

Note: The message informing a Slack channel that a stream is going live is not linked to the Event feature, where a broadcaster or company can set a specific date and time that an event will begin. The message will instead publish whenever live video content is pushed from the encoder to that channel. This makes it faster to set up a broadcast, but it also means test streams will trigger notifications, so broadcasters will want to disable Slack notifications before doing a test live stream.

Slack Integration Feature History and Launch

The Slack Integration feature was actually born as a December 2015 hackathon idea.

The feature is launching today, April 7th, 2016, and will be available on all plan levels at Ustream. This includes Align, where it becomes the second Connection feature after YouTube.

Want to learn more about Ustream Align and how this Slack video integration can help bolster your internal communication?

Contact Us Now

Interlaced Video & Deinterlacing for Streaming


Have you ever seen video content that looks like the image to the right, but weren’t sure of the cause? These overt horizontal lines, appearing as pixelation around movement like something out of an old-school Atari game, are an artifact created by presenting an interlaced source in a progressive format.

This article explains what interlaced video content is and which sources, such as analogue cameras, can produce this type of video on live streams. It then goes over deinterlacing techniques to remove this artifact, how to easily enable them on the encoder side… and why you wouldn’t want to use deinterlacing on content that is already progressive.

What Is Interlaced Video?

Interlaced video is a technique that was originally created and made popular before the advent of digital televised content. First developed over 70 years ago, it was primarily for television video formats like NTSC and PAL.

At its root, interlacing was an early form of video compression used to make video look smoother while sending less data. This was achieved by breaking up each full frame of video into alternating lines taken from two separate fields that were captured at slightly different times. One set of lines would be delivered to the viewer, and 1/60th of a second later the second set would be sent.

In contrast to other possible methods of the time, this process granted what appeared to be smooth movement, at least to the human eye, while being able to send less data related to the broadcast. Interlacing can cause issues, though, trying to deliver that feed to a progressive source due to the differences in presentation between the two.
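As a rough illustration of the weave described above (not any specific standard's implementation), the interleaving of two fields into one frame can be sketched in plain Python, assuming a tiny grayscale frame where each field is filled with a constant value:

```python
HEIGHT, WIDTH = 6, 4  # tiny frame for illustration

# Two fields captured 1/60 s apart: the top field holds the even scan
# lines (value 0, "the scene at time t"), the bottom field the odd
# lines (value 1, "the scene at time t + 1/60").
top_field = [[0] * WIDTH for _ in range(HEIGHT // 2)]
bottom_field = [[1] * WIDTH for _ in range(HEIGHT // 2)]

# Weaving interleaves the two fields line by line into one full frame.
frame = [None] * HEIGHT
frame[0::2] = top_field     # even lines come from the first field
frame[1::2] = bottom_field  # odd lines come from the second field

# If anything moved between the two capture times, adjacent lines now
# disagree: the "combing" artifact seen when the result is shown on a
# progressive display.
```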


Progressive Video And How It Differs From Interlaced Video

Unlike interlaced content, progressive video is a video track that consists of complete frames. There is a slight asterisk to this statement, as techniques like interframe can be used to compress video content by removing redundancies from frame to frame (read more about the interframe process). Even with this technique, progressive video content will not alternate fields and will present full keyframes of a kind you will never find in interlaced content. This means it won’t serve odd or even lines at different time intervals from each other.

Consumers will be familiar with this terminology due to its proliferation in HD content. For example, 1080p content means it has a vertical resolution of 1080 lines while the “p” relates that this is progressive content.


Which Method Is Better: Progressive Or Interlacing?

To be blunt, it doesn’t actually matter which is better. Many playback methods, like computer monitors or modern HD TVs, do not support interlacing. So even if interlacing provided better-looking content, a broadcaster would still want to go with progressive delivery due to its broad support. Otherwise, the broadcaster would be displaying interlaced video in a progressive format.

Assuming both methods were supported equally, the human eye can’t keep up and the motion should look smooth regardless.


What It Looks Like: Interlaced Content As Progressive Video

Sometimes a broadcaster needs to use an interlaced source for streaming: taking an interlaced source and making it progressive, or watching it on a progressive medium like a computer monitor. This need can range from wanting to use an older broadcast to using an analogue camera that outputs interlaced video.

Converting the video involves combining the two fields, created as part of the interlacing process, into a single frame. By default, this process creates a rather ugly artifact around high motion in the video track. The motion between fields can cause visible tearing when displayed as progressive video. Essentially, the video track shows two different line fields where the fast motion is occurring, creating a staggered line appearance as seen in the figure below, on the left.


Left: Interlaced video shown in a progressive format. Right: Deinterlaced video (more on this later).


How To Tell If Your Camera Captures Interlaced Video

Much of this article has treated interlacing as a legacy component, but that doesn’t give a fair representation. A lot of analogue cameras, for example, are set up to deliver video in an interlaced manner, and even some modern digital cameras still offer an interlaced mode. The reasoning is partly compatibility, and partly that 1080, even 1080i, is a strong selling point and 1080i is cheaper to produce. So even though interlacing might be associated with older televised broadcasts, it’s still very possible to use an analogue camera with a capture card, or another setup, and run into interlacing.

One way to tell whether a camera outputs interlaced content is to check the specs. While some will be overt, stating that the camera outputs in interlaced mode, others will state it in their listed resolution. For example, we already discussed that 1080p is an HD feed that is progressive; 1080i, by contrast, means HD, interlaced content. Chances are good that someone has seen 1080p content much more frequently than the interlaced version. Most modern analogue cameras, if they are interlaced, should mention it either directly or with the resolution. An older analogue camera, from before 2003, almost certainly outputs interlaced content, as the first consumer-affordable progressive camera, the Panasonic AG-DVX100, was released in 2002.


What Is Deinterlacing Video: When You Have To Use Interlaced Sources

Thankfully, there is a process called deinterlacing which can solve issues created from presenting interlaced content in a progressive medium. Deinterlacing uses every other line from one field and interpolates new in-between lines without tearing, applying an algorithm to minimize the resulting artifacts.
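A minimal sketch of that idea in Python, using simple line interpolation only (real encoder deinterlacers, such as blend or motion-adaptive filters, are considerably more involved):

```python
def deinterlace_interpolate(frame):
    # Keep the lines from one field (the even rows here) and rebuild
    # the other field's lines by averaging the neighbors above and
    # below. A toy version: production filters are smarter.
    out = [row[:] for row in frame]
    height, width = len(frame), len(frame[0])
    for y in range(1, height, 2):           # rows from the second field
        above = out[y - 1]
        below = out[y + 1] if y + 1 < height else out[y - 1]
        out[y] = [(above[x] + below[x]) / 2 for x in range(width)]
    return out

# A 4-line frame where the second field (odd rows, value 90) disagrees
# badly with the first, as it would after fast motion between captures.
frame = [[10, 10],
         [90, 90],
         [20, 20],
         [90, 90]]
clean = deinterlace_interpolate(frame)  # combing rows become 15.0 and 20.0
```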


How To Deinterlace Video For Live Streaming

Deinterlacing is done at the encoder level for live content. How this is done varies from encoder to encoder, with some enabling it through a simple check box.

For Ustream Producer, deinterlacing is found under source settings, via Sources and then “Show Source Settings…”. If a source is being used that can be deinterlaced, a checkbox will appear to enable it for that source.

Adobe Flash Media Live Encoder (FMLE) users can find the deinterlace option on the main Encoding Options panel. Simply called “Deinterlace”, this feature is found to the left of Timecode at the bottom of the available options.

Teradek encoder products, such as the Cube and VidiU, offer built-in hardware based deinterlacing. Inside the interface for the encoder, this feature is found under Encoder Settings. Located above Adaptive Framerate, this feature is called simply Deinterlacer and can be enabled or disabled.

On Wirecast, this is found under Sources and then “Show Source Settings…”. From this screen you can select your source, with most having two options available. For example, a capture card source might show “Capture Device Size” and “Device Deinterlacing”. Changing the latter from “None” to “Blend” will activate deinterlacing.

If someone is using an older version of Wirecast, this option is instead located under File > Preferences > Advanced.

For vMix, the user has to click Add Input in the left corner to open the input selection panel. The options present will depend on the type of source selected. If selecting a source like a camera, an option called Interlaced should be present, located below Frame Rate. Unlike other encoders, to deinterlace content this option needs to be unchecked.


Another Source Of Interlaced Video: Three-two Pull Down

Sometimes referred to as 2:3 pulldown, with the numbers used interchangeably, three-two pull down is a process used to convert material from film to an interlaced NTSC display rate. It involves taking content created at 24 frames per second and converting it to 29.97 frames per second, the signal frame rate of NTSC video. This is done by duplicating fields: two from one frame and then three from the next, or vice versa.
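The 2:3 cadence is easy to see in a sketch. The Python snippet below is purely illustrative of the field-duplication pattern (the frame labels are hypothetical):

```python
def three_two_pulldown(film_frames):
    # Expand 24 fps film frames into fields in a 2:3 cadence: two
    # fields from one frame, three from the next. Every 4 film frames
    # become 10 fields, i.e. 5 interlaced video frames, matching the
    # 24 -> 30 (29.97) fps ratio.
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 2 if i % 2 == 0 else 3
        fields.extend([frame] * copies)
    return fields

fields = three_two_pulldown(["A", "B", "C", "D"])
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']

# Pairing consecutive fields yields the interlaced video frames; the
# mixed pairs like ('B', 'C') are where pulldown artifacts live.
video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
```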


Reverse Telecine: Removing the 3:2 Pull Down

Also known as inverse telecine (IVTC), reverse telecine is a process that removes the effects of stretching a source from 24 frames per second to 29.97 frames per second. This involves removing the added information from the frames to return the content to 24 frames per second.

For example, frame 1 might be converted into frame 1A and frame 1B through interlacing, each being an odd or even line sequence. Frame 2, however, might be converted into frame 2A, frame 2B, and frame 2C, with the last one being duplicated content used to raise the frame rate. As part of reverse telecine, this added content is removed to restore the video to its original frame rate.

If you want to live broadcast content that previously had a 3:2 pull down applied, it’s recommended to encode it with a reverse telecine process ahead of the broadcast. Apple Compressor and Handbrake (the latter calls this process “detelecine”) are two examples of programs that can achieve this.


Can Deinterlacing Video Be Bad?

Yes. If the source is not interlaced, deinterlacing can introduce needless artifacting when the deinterlacing method is inadequate. This will be most noticeable in motion, which suffers a greater loss of quality. Fine, rounded details can also suffer, often converting a smooth look into a blocky one, like mini stair steps, similar to pixelated video games trying to render curves. If blend deinterlacing is used, it can show motion from two different moments in the same frame.

In addition, deinterlacing is more CPU intensive, so an encoder using it needs to run on more powerful hardware than a similar encoder that doesn’t. From a reliability standpoint, it’s better not to use the feature unnecessarily.

So if a source is not interlaced, do not apply deinterlacing to it. If someone isn’t sure whether a source is interlaced, do a quick test broadcast without deinterlacing. After some motion occurs in the feed, it should be easy to tell whether the source needs to be deinterlaced.

If someone is dealing with mixed content, where part of the video is interlaced and other elements are not, it’s up for debate whether the entire feed should be deinterlaced. Interlaced content displayed in a progressive manner is much more disruptive to the viewing experience than artifacts introduced by inadequate deinterlacing of already progressive content. For this reason, I personally recommend deinterlacing when dealing with mixed content. Schools of thought go both ways, though. For example, if the amount of interlaced content is minimal, like briefly showing an older TV playing interlaced content, a broadcaster can get away without it.


Summary: Know Interlacing And How To Correct It

Many modern broadcasters will never encounter interlaced content in their own broadcasting. For example, someone using just a webcam and a software-based encoder will never have to worry about this. But as setups become more complex, bringing in professional analogue cameras or legacy equipment and sources (VHS tapes, etc.), interlacing might come up, and it’s best to know the quick techniques your encoder can use to correct it.


Keyframes, InterFrame & Video Compression


The default mental image of video compression involves unwanted video artifacts, like pixelation and blockiness in the image. That image sells short the complexity that actually goes into compressing video content, though. In particular, it overlooks a fascinating process called interframe, which uses keyframes and delta frames to intelligently compress content in a manner intended to go unnoticed.

This article describes this process in detail, while also giving best practices and ideal encoder settings that you can apply to your live streaming at Ustream.

Understanding Video Frames

There are a lot of terms and aspects of streaming technology that can be taken for granted. As someone matures as a broadcaster, it pays to understand elements in greater detail to learn why a process is done and also optimal settings.

For example, a keyframe is something quite a few broadcasters have seen mentioned before, or noticed as a setting in an encoder like Wirecast, without quite realizing what it is and how beneficial the process is for streaming. A keyframe is an important element, but really only part of a larger process that helps reduce the bandwidth required for video. To understand this relationship, one first needs to understand video frames.

Starting at a high level, most probably realize that video content is made up of a series of frames. Usually denoted as FPS (frames per second), each frame is a still image that, when played in sequence, creates a moving picture. So content created at 30 FPS has 30 “still images” playing for every second of video.

An Opportunity To Compress: InterFrame

On an average video, if someone were to take 90 consecutive frames and spread them out, they would see a lot of elements that are pretty much identical. For example, if someone is talking while standing next to a motionless plant, it’s unlikely that information related to that plant will change. That’s a lot of wasted bandwidth used just to convey that something hasn’t changed.

Consequently, when looking for effective ways to compress video content, frame management became one of the cornerstone principles. If that plant in the example is not going to change, why not keep using the same elements in subsequent frames to reduce space?

This realization gave birth to the idea of interframe prediction, a video compression technique that divides frames into macroblocks and then looks for redundancies between blocks. The process works through keyframes, also known as i-frames or intra frames, and delta frames, which only store changes in the image to reduce redundant information. This collection of frames is often referred to by the rather non-technical-sounding name of a “group of pictures”, abbreviated as GOP. Video codecs, used for encoding or decoding a digital data stream, all have some form of interframe management. H.264, MPEG-2, and MPEG-4 all use a three-frame approach that includes keyframes, p-frames, and b-frames.

What Is A Keyframe?

The keyframe (i-frame) is the full frame of the image in a video. Subsequent frames, the delta frames, only contain the information that has changed. Keyframes will appear multiple times within a stream, depending on how it was created or how it’s being streamed.

If someone were to Google “keyframe”, they are likely to find results related to animation and video editing. In this instance, we are using the word keyframe as it relates to video compression and its relationship to delta frames.

How Do P-frames Work?

Also known as predictive frames or predicted frames, p-frames follow another frame and only contain part of the image in a video. They are classified as delta frames for this reason. P-frames look backwards to a previous p-frame or keyframe (i-frame) for redundancies. The amount of image presented in the p-frame depends on the amount of new information contained between frames.

For example, someone talking to the camera in front of a static background will likely only contain information related to their movement. However, someone running across a field as the camera pans will have a great deal more information with each p-frame to match both their movement and the changing background.

What Are B-frames And How Do They Differ From P-frames?

Also known as bi-directional predicted frames, the b-frames follow another frame and only contain part of the image in a video. The amount of image contained in the b-frame depends on the amount of new information between frames.

Unlike p-frames, b-frames can look both backward and forward to a previous or later p-frame or keyframe (i-frame) for redundancies. This makes b-frames more efficient than p-frames, as they are more likely to find redundancies. However, b-frames are not used when the encoding profile is set to baseline inside the encoder. This means the encoder has to be set to an encoding profile above baseline, such as “main” or “high”.
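One way to picture the resulting structure is to sketch the frame-type cadence of a GOP. The pattern below is hypothetical (real encoders choose their own cadence), but it shows how b-frames slot between the anchor frames they reference, and how a baseline profile reduces the GOP to keyframes and p-frames only:

```python
def gop_pattern(gop_size, b_run=2):
    # Build a display-order frame-type sequence for one GOP: a
    # keyframe (I) followed by runs of b-frames anchored by p-frames.
    frames = ["I"]
    while len(frames) < gop_size:
        run = min(b_run, gop_size - len(frames) - 1)
        frames.extend(["B"] * run)  # reference backward and forward
        frames.append("P")          # reference backward only
    return frames

print("".join(gop_pattern(12)))           # IBBPBBPBBPBP
print("".join(gop_pattern(12, b_run=0)))  # baseline profile: I then all P
```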

How Do You Set A Keyframe?

In regards to video compression for live streaming, a keyframe is set inside the encoder, configured by an option often called the “keyframe interval”.

The keyframe interval controls how often a keyframe (i-frame) is created in the video. The higher the keyframe interval, generally the more compression is applied to the content, although that doesn’t necessarily mean a noticeable reduction in quality. As an example of how keyframe intervals work: if the interval is set to every 2 seconds and the frame rate is 30 frames per second, roughly every 60 frames a keyframe is produced.
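The seconds-to-frames arithmetic is simple enough to express directly (the function name is just for illustration):

```python
def keyframe_interval_in_frames(interval_seconds, fps):
    # Convert a keyframe interval in seconds into the frame count that
    # frame-based settings (like Wirecast's "key frame every") expect.
    # Rounded, since 29.97 fps doesn't land on whole frames.
    return round(interval_seconds * fps)

print(keyframe_interval_in_frames(2, 30))      # 60
print(keyframe_interval_in_frames(2, 29.97))   # 60  (59.94 rounded)
print(keyframe_interval_in_frames(10, 29.97))  # 300 (299.7 rounded)
```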

The term “keyframe interval” is not universal, and many encoders have their own term for it. Adobe Flash Media Live Encoder (FMLE) and vMix, for example, use the term “keyframe frequency”. Other programs and services might call the interval the “GOP size” or “GOP length”, going back to the “group of pictures” abbreviation.

Choosing A Keyframe Interval At The Encoder Level

In terms of setting a keyframe interval, it varies from encoder to encoder.

For FMLE, this option, denoted as “Keyframe Frequency”, is found in the software encoder by clicking the wrench icon to the right of format.

In Wirecast, this is set from the Encoder Presets menu, where the option is called “key frame every”. Wirecast is different in that the interval is actually denoted in frames. So for a 30 FPS broadcast, setting “key frame every” 60 frames would roughly give a keyframe interval of 2 seconds, as there are 30 frames every second.

For the vMix encoder, one needs to first click the gear icon near streaming, which opens the Streaming Settings. Near the quality option here is another gear icon and clicking this will open up a menu that has the ability to modify the “Keyframe Frequency”.

How to setup keyframe interval in OBS

Setting the keyframe interval in version v0.542b of Open Broadcast Software (OBS)

In Open Broadcast Software (OBS), for versions after v0.55b, the keyframe interval can be set in the Settings area under Advanced. For older versions, such as v0.542b, it’s not very clear how to modify the keyframe interval, but it is also a component of Settings. Once there, go to Advanced and then select “Custom x264 Encoder Settings”. In this field one needs to enter the following string: “keyint=XX”, with XX being the number of frames until a keyframe is triggered. Like Wirecast, if a keyframe interval of 2 seconds is desired and the FPS is 30, enter: “keyint=60”.

For XSplit, keyframe interval is a component of the channel properties. Under the Video Encoding area, one will find a listing that says “Keyframe Interval (secs)”. To the far right of this is a gear icon. Clicking the gear will launch a “Video Encoding Setup” popup. This will allow someone to specify the keyframe interval in seconds.

Relationship Between Keyframes And Bitrates

Mileage with this explanation might vary, as encoders manage bitrates and keyframes differently. Using an encoder like Wirecast, one might notice that broadcasting someone talking against a still background produces “higher quality” than broadcasting someone jumping up and down against a moving background, even with the same exact average bitrate and keyframe interval. The reason is, in part, that the delta frames have a ton of information to convey in the jumping example. There is very little redundancy, meaning a lot more data needs to be conveyed in each delta frame.

An encoder like Wirecast, though, tries its hardest to keep the stream around the selected average bitrate. Consequently, the added bandwidth needed for the additional information in the delta frames results in the quality being reduced to keep the average bitrate around the same level.

What’s The Best Setting For A Keyframe Interval?

There has never been an industry standard, although 10 seconds is often mentioned as a good keyframe interval, even though that’s no longer suggested for streaming. The reason it was suggested is that, for a standard 29.97 FPS file, the resulting content is responsive enough to support easy navigation from a preview slider. To explain: a player cannot start playback on a p-frame or b-frame. Using the 10-second example, if someone tried to navigate to a point 5 seconds into the feed, playback would actually shift back 5 seconds to the nearest keyframe and begin there. This was considered a good trade-off for smaller bandwidth consumption, although for reference, DVDs elected to use something much smaller than 10 seconds.
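That snap-back behavior amounts to simple arithmetic: playback starts at the last keyframe at or before the requested seek point. A sketch (illustrative only; real players also depend on container indexing):

```python
def playback_start(seek_seconds, keyframe_interval):
    # Playback can only begin on a keyframe, so a seek snaps back to
    # the most recent keyframe at or before the requested time.
    return (seek_seconds // keyframe_interval) * keyframe_interval

print(playback_start(5, 10))   # 0  -> a 5 s seek restarts at the beginning
print(playback_start(25, 10))  # 20
print(playback_start(5, 2))    # 4  -> with a 2 s interval, only 1 s off
```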

However, for live streaming, the recommended interval has dropped drastically. The reason is the advent of adaptive bitrate streaming. For those unfamiliar, this technology enables a video player to dynamically change between available resolutions and/or bitrates based on the conditions of the viewer trying to watch. Someone with a slower download speed will be given a lower bitrate version, if available. Other criteria, like playback window size, will also impact which bitrate is given.

True adaptive streaming doesn’t just make this check when the video content initially loads, though; it can also alter the bitrate based on changes on the viewer’s side. For example, if a viewer were to move out of range of a Wi-Fi network on their mobile device, they would start using their normal cellular service, which is liable to result in a slower download speed. As a result, the viewer might be trying to watch content at too high a bitrate for their download speed. The adaptive streaming technology should notice this discrepancy and switch to a different bitrate.

The keyframe interval comes into play here, as making that switch occurs at the next keyframe. So if someone is broadcasting with a 10-second interval, it could take up to 10 seconds before the bitrate and resolution change. In that time, the content might buffer on the viewer’s side before the change occurs, something that could lead to viewer abandonment.

Because of this, it’s recommended to set your keyframe interval to 2 seconds for live streaming. This lets the video track change bitrates quickly, often before the viewer experiences buffering due to a degradation in download speed.

What’s An IDR-Frame?

We are looping back at this point, but it pays to understand p-frames and b-frames and to get a crash course in adaptive streaming before talking about the IDR-frame, or Instantaneous Decode Refresh frame. These are actually keyframes, and each keyframe can be either IDR based or non-IDR based. The difference is that an IDR based keyframe works as a hard stop: it prevents p-frames and b-frames from referencing frames that occurred before it. A non-IDR keyframe allows those frames to look further back for redundancies.

On paper, a non-IDR keyframe sounds ideal: it can greatly reduce file size by being allowed to look at a much larger sample of frames for redundancies. Unfortunately, a lot of issues arise with navigation and the feature does not play nicely with adaptive streaming. For navigation, let’s say someone starts watching 6 minutes into a stream. That’s going to cause issues as the p-frames and b-frames might be referencing information that was never actually accessed by the viewer. For adaptive streaming, a similar issue can arise if the bitrate and resolution are changed. This is because the new selection might reference data that the viewer watched at a different quality setting and is no longer parallel. For these reasons, it’s always recommended to make keyframes IDR based.

Generally, encoders will either provide the option to turn IDR based keyframes on or off, or won’t give the option at all. For encoders that don’t give the option, it’s almost assuredly because the encoder is set up to only use IDR-frames.

Should Someone Use An “Auto” Keyframe Setting?

In short: no.

Auto keyframe settings are, in principle, pretty great. They automatically force a keyframe during a scene change. For example, switching from a PowerPoint slide to an image of someone talking in front of a camera would force a new keyframe. That’s desirable, as the delta frames would not have much to work with, unable to find redundancies between the PowerPoint slide and the camera image.

Unfortunately, this process does not work with some adaptive streaming technologies, most notably HLS. The HLS process requires the keyframes to be predictable and in sync. Using an “auto” setting will create variable intervals between keyframes. For example, the time between keyframes might be 7 seconds and then later it might be 2 seconds if a scene change occurs quickly.
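A quick sketch makes the problem visible (illustrative only; real HLS packagers are more involved). HLS can only cut a segment at a keyframe, so the keyframe cadence dictates the segment lengths a player sees:

```python
def segment_boundaries(keyframe_times, target_len=6.0):
    """HLS can only cut a segment at a keyframe, so actual segment
    lengths are dictated by where the keyframes happened to land."""
    segments = []
    start = keyframe_times[0]
    for t in keyframe_times[1:]:
        if t - start >= target_len:
            segments.append(t - start)
            start = t
    return segments

# Fixed 2-second keyframes: every segment comes out an even 6 seconds.
fixed = [t * 2.0 for t in range(16)]  # keyframes at 0, 2, 4, ... 30s
print(segment_boundaries(fixed))      # -> [6.0, 6.0, 6.0, 6.0, 6.0]

# "Auto" keyframes on scene changes: unpredictable segment lengths,
# which throws off players that expect a consistent target duration.
auto = [0.0, 7.0, 9.0, 16.5, 18.0, 25.0, 27.5, 30.0]
print(segment_boundaries(auto))       # -> [7.0, 9.5, 8.5]
```

The fixed interval also keeps segment boundaries aligned across the multiple bitrates of an adaptive set, which is what makes mid-stream quality switches seamless.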

Setting the Keyframe interval in OBS

Setting a whole number in OBS v0.55b to disable auto switching

For most encoders, disabling “auto change” or “scene change detect” features often just means specifying a keyframe interval. For example, in OBS, if the keyframe interval is set to 0 seconds, the auto feature kicks in. Placing any other number there, like 1 or 2, disables the auto feature.

If an encoder, like Wirecast, has an option for “keyframe alignment”, be aware that this is not the same process. Keyframe alignment creates keyframes at specific timestamps and is best suited for keeping the multiple bitrates a broadcaster sends through the encoder in sync.

Perfecting A Keyframe Strategy

With the advent of adaptive bitrates, the industry has arrived at a pretty clear answer on best practices for keyframes and live streaming. That strategy includes:

  • Setting a keyframe interval at around 2 seconds
  • Disabling any “auto” keyframe features
  • Utilizing IDR based keyframes
  • Using an encoding profile higher than baseline to allow for b-frames

This strategy allows for easy navigation of content, including on demand viewing after a broadcast, while still reaping the benefits of frame management and saving bandwidth by reducing redundancies. It also supports adaptive bitrate streaming, an important element of a successful live broadcast and of supporting viewers with slower connections.
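The checklist above can be summed up in a few lines of Python (the key names below are our own shorthand, not a real encoder’s configuration schema; treat it as a sketch of the strategy, not a drop-in config):

```python
def recommended_keyframe_settings(fps=30):
    """Hypothetical summary of the keyframe strategy discussed above.
    Key names are illustrative, not any specific encoder's options."""
    return {
        "keyframe_interval_seconds": 2,       # a keyframe roughly every 2s
        "keyframe_interval_frames": 2 * fps,  # the same interval in frames
        "auto_scene_change_keyframes": False, # keep intervals predictable
        "idr_keyframes": True,                # every keyframe a hard stop
        "h264_profile": "main",               # above baseline, so b-frames work
    }

# At 30 fps, a 2-second interval means a keyframe every 60 frames.
print(recommended_keyframe_settings(30)["keyframe_interval_frames"])  # -> 60
```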

Please Contact Sales with further questions on interframes and how Ustream can help you deliver high quality video alongside lower bitrate options through cloud transcoding.


Disclaimer: This article is aimed at helping live broadcasters, or at least those planning a healthy video on demand strategy built on streaming. The answer to many of these questions would of course differ depending on the playback method. For example, when creating video content intended to be played as a local file, the “scene change” option is one example of a feature that would actually be ideal. Some of these techniques only become undesirable in a streaming context when adaptive technology is used.

The History of Ustream at NAB

Posted on by

ustream at nab

From April 16th through the 21st, the Las Vegas Convention Center will be taken over by 100,000 video professionals and content creators from 150+ countries looking for the chance to get hands-on experience with emerging technologies and the latest innovations in video production and delivery. NAB 2016 is right around the corner, and Ustream has had the privilege of attending the big event 4 years in a row. Let’s take a look back at some of the highlights and the history of Ustream at NAB.

Ustream started our presence at NAB way back in 2012 by providing live coverage for our partners at NewTek, TWiT & Panasonic, and combined all of the action into one super channel that helped viewers keep up on all of the excitement at the show.

The theme of NAB 2013 was the evolution of broadcast media and how social media and consumer engagement are changing the industry landscape. Ustream’s CEO & Founder, Brad Hunstable, had the pleasure of hosting a session about the “Reinvention of Live Media” that went into depth about how Ustream stays ahead of the curve of the new age of real time consumer behavior. We also sponsored the Technology Awards Luncheon, where the National Association of Broadcasters gave recognition to some of the most innovative people in the video community.

In 2014, Teradek broadcast coverage from NAB and updated online audiences on the latest and greatest announcements from the world’s largest broadcast equipment manufacturers and industry influencers. The live show was streamed exclusively on Ustream for 32 hours over the course of 4 days and offered Spanish captioning for the very first time. Special segments were provided by a variety of partners, including Streaming Media, Philip Bloom, & Broadcast Beat, who each offered their own unique perspective on the industry and provided a well-rounded report of everything happening on the show floor.

NAB 2015 was also the debut of the Online Video Conference, where executives from digital media firms gathered to discuss issues such as online original content, the migration to over-the-top (OTT) content, and online advertising metrics. This set the stage for Ustream to show off our latest solution for marketers, the Ustream Pro Broadcasting Video Marketing Module, along with our platforms for internal communicators and broadcasters, Ustream Align and Ustream Pro Broadcasting, in addition to being the exclusive onsite live streaming provider for clients such as Teradek, Maxon, Sony, Adobe and JVC.

What does Ustream have in store for NAB 2016? Well, you are going to have to join us in Las Vegas to find out! Register today using the code “LV7669” to get access to the show for free until April 1st. We look forward to seeing you there!


Video Terms: Live Streaming & Hosting Glossary

Posted on by

A streaming media and video glossary that contains definitions of video terms, technologies and techniques related to live streaming, broadcasting and video hosting.

These video terms are relevant for both new techniques and legacy methods, which still have ramifications today when handling older media. The glossary will be continuously updated as the industry evolves.

# | A | B | C | D | E | H | I | K | L | M | P | R | S | T | U | V

3:2 Pull Down (aka: Three-two Pulldown)

A process used to convert material from film to interlaced NTSC display rates, from 24 to 29.97 frames per second. This is done by duplicating fields: 2 from one frame and then 3 from the next, or vice versa.
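The field arithmetic can be shown with a small sketch (our own illustration of the pattern, not a video processing library): each group of 4 film frames yields 10 fields, which interlace into 5 video frames, stretching 24 fps toward 30 fps (29.97 in practice, since NTSC runs at 30/1.001):

```python
def pulldown_32(film_frames):
    """Repeat fields in a 2-3 pattern: every 4 film frames become
    10 fields, i.e. 5 interlaced video frames (24 fps -> ~30 fps)."""
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 2 if i % 2 == 0 else 3
        fields.extend([frame] * copies)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)            # -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields) // 2)  # -> 5 video frames from 4 film frames
```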

608 Captions (aka: line 21 captions, EIA-608, CEA-608)

These captions display white text inside a black box that surrounds the text. They appear on top of the video content and support four caption tracks.

708 Captions (aka: CEA-708)

These captions were designed with digital distribution of content in mind. They are a more flexible version of captions over the older 608 caption approach, allowing for more caption tracks, more character types and the ability to modify the appearance.

AAC (aka: Advanced Audio Coding)

This audio coding format is lossy, featuring compression that does impact the audio quality. It offers better compression and increased sample frequency when compared to MP3.

AC-3 (aka: Audio Codec 3, Advanced Codec 3, Acoustic Coder 3)

A Dolby Digital audio format found on many home media releases. Dolby Digital is a lossy format, featuring compression that will impact audio quality. The technology is capable of utilizing up to six different channels of sound. The most common surround experience is a 5.1 presentation.

Adaptive Streaming (aka: Adaptive Bitrate Streaming)

This streaming approach offers multiple streams of the same content at varying qualities. These streams are served inside the same video player and often differ based on bitrate and resolution. Ideally the player serves the viewer the bitrate most appropriate to their setup, based on criteria like download speed.
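The selection logic can be sketched in a few lines (a simplified illustration, not Ustream’s or any real player’s algorithm; the bitrate ladder and headroom factor are assumptions):

```python
def pick_rendition(renditions_kbps, measured_kbps, headroom=0.8):
    """Choose the highest bitrate that fits within a fraction of the
    viewer's measured download speed; the headroom factor guards
    against fluctuation. Falls back to the lowest rendition."""
    budget = measured_kbps * headroom
    fitting = [r for r in renditions_kbps if r <= budget]
    return max(fitting) if fitting else min(renditions_kbps)

ladder = [500, 1200, 2500, 4500]  # example bitrate ladder, in Kbps
print(pick_rendition(ladder, measured_kbps=3500))  # -> 2500
print(pick_rendition(ladder, measured_kbps=400))   # -> 500
```

Real players re-run a decision like this continuously, stepping up or down as the measured throughput changes.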

B-frames (aka: bi-directional Predicted Frames)

These frames follow another frame and only contain part of the image in a video. B-frames look backward and forward to a previous or later p-frame or keyframe (i-frame) and only contain new information not already presented.


Bandwidth

In relation to video, bandwidth describes either an internet connection speed or a form of consumption in relation to web hosting. For speed, it is used as a point of reference for an internet connection; when streaming content, this matters because a viewer has to have enough bandwidth to watch. For web hosting, bandwidth can be used as a measure of consumption.

Bit Rate (aka: data rate or bitrate) 

The amount of data per unit of time. For streaming, this applies to video and audio content, is typically measured per second, and is expressed in kilobits per second (Kbps) or megabits per second (Mbps).
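Bitrate times duration gives the size of a recorded stream, which is handy for estimating storage and bandwidth needs. A quick sketch (our own helper, with example numbers):

```python
def stream_size_mb(video_kbps, audio_kbps, duration_s):
    """Rough size of a recorded stream: bitrate (kilobits per second)
    times duration, converted from kilobits to megabytes."""
    total_kilobits = (video_kbps + audio_kbps) * duration_s
    return total_kilobits / 8 / 1000  # 8 bits per byte, 1000 KB per MB

# A one-hour broadcast at 2000 Kbps video plus 128 Kbps audio:
print(round(stream_size_mb(2000, 128, 3600)))  # -> 958 (MB)
```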


Buffer

Video streaming involves sending video to the end user in chunks of data. The video player builds a buffer of chunks that have not yet been viewed, so the viewer can keep watching from the buffer if a chunk is delayed or lost. Ideally the missing chunk arrives before the buffer empties, causing no disruption in viewing. However, a viewer’s connection can be poor enough that the chunk does not arrive before the buffer is empty. If this occurs, playback stops and the player generally displays a buffering message while it waits for the missing chunk and attempts to rebuild the buffer.

CDN (aka: Content Delivery Network)

These are large networks of servers that have copies of data, pulled from an origin server, and are often geographically diverse in their location. The end user pulls the needed resources from the server that is closest to them, which is called an edge server. This process is done to decrease any delays that might be caused due to server proximity to the end user, as larger physical distances will result in longer delays, and ideally avoid congestion issues. Due to the resource intensive process of video streaming, most streaming platforms utilize a CDN.

CRTP (aka: Compressed Real Time Transport Protocol)

This is a compressed form of RTP. It was designed to reduce the size of the headers for the IP, UDP (User Datagram Protocol) and RTP. For best performance, it needs to work with a fast and dependable network or can experience long delays and packet loss.


Deinterlacing

Deinterlacing filters combine the two alternating fields found in interlaced video to form a clean shot in a progressive video. Without deinterlacing, the interlaced content will often display motion with a line-like appearance.

Embedded Player

This is a media player that is enclosed in a web source, which can range dramatically from being seen in an HTML document on a website to a post on a forum. Players will vary based on appearance, features and available end user controls. An iframe embed, which can be used to embed a variety of content, is one of the most common methods of embedding a video player.

H.264 (aka MPEG-4 Part 10, Advanced Video Coding, MPEG-4 AVC)

A video compression technology, commonly referred to as a codec, that is defined in the MPEG-4 specification. MP4 is the most common container format for H.264 video.


HDS (aka: HTTP Dynamic Streaming)

Adobe’s HTTP Dynamic Streaming is an HTTP-based technology for adaptive streaming. It segments the video content into smaller video chunks, allowing switching between bit rates when viewing.


HLS (aka: HTTP Live Streaming)

Apple’s HTTP Live Streaming is an adaptive streaming technology. It functions by breaking the stream down into smaller MPEG2-TS files. These files vary by bitrate and oftentimes resolution, and ideally are served to the viewer based on the criteria of their setup, such as download speed.

Interlaced Video

A technique used for television video formats, such as NTSC and PAL, in which each full frame of video actually consists of alternating lines taken from two separate fields captured at slightly different times. The two fields are then interlaced or interleaved into the alternating odd and even lines of the full video frame. When displayed on television equipment, the alternating fields are displayed in sequence, depending on the field dominance of the source material.

IP Camera (aka: Internet Protocol Camera)

A digital camera that can both send and receive data via the Internet or computer network. These cameras are designed to support a limited number of users that could connect directly to the camera to view. They are RTSP (Real Time Streaming Protocol) based, and for that reason are not largely supported by broadcasting platforms without using special encoders.

Keyframe (aka: i-frame, Intra Frame)

This is the full frame of the image in a video. Subsequent frames only contain the information that has changed between frames. This process is done to compress the video content.

Key Frame Interval (aka: Keyframe Interval)

Set inside the encoder or when the video is being encoded, the key frame interval controls how often a keyframe is created in the video. The keyframe is a full frame of the image. Other frames will generally only contain the information that has changed.

Live Streaming

Relates to media content being delivered live over the Internet. The process involves a source (video camera, screen captured content, etc), an encoder to digitize the feed (Teradek VidiU, Telestream Wirecast, etc), and a platform such as Ustream or another provider that will typically take the feed and publish it over a CDN (Content Delivery Network). Content that is live streamed will typically have a delay in a magnitude of seconds compared to the source.

Lossless Compression

Lossless encoding is any compression scheme, especially for audio and video data, that uses a nondestructive method that retains all the original information. Consequently, lossless compression does not degrade sound or video quality meaning the original data could be completely reconstructed from the compressed data.

Lossy Compression

Lossy encoding is any compression scheme, especially for audio and video data, that removes some of the original information in order to significantly reduce the size of the compressed data. Lossy image and audio compression schemes such as JPEG and MP3 try to eliminate information in subtle ways so that the change is barely perceptible, and sound or video quality is not seriously degraded.

MPEG-DASH (aka: Dynamic Adaptive Streaming over HTTP)

An adaptive bitrate streaming technology. It contains both the encoded audio and video streams along with manifest files that identify the streams. The process involves breaking the video stream down into small HTTP sequence files, which allow the player to switch between quality levels as conditions change.

MPEG-TS (aka: Transport Stream, MTS, TS)

A container format that hosts packetized elementary streams for transmitting MPEG video muxed with other streams. It can also have separate streams for video, audio and closed captions. It’s commonly used for digital television and streaming across networks, including the internet.

P-frames (aka: Predictive Frames, Predicted Frames)

A p-frame follows another frame and only contains part of the image in a video. P-frames look backward to a previous p-frame or keyframe for redundancies.

Program Stream (aka: PS)

These streams are optimized for efficient storage. They contain elementary streams without an error detection or correction process, and assume the decoder has access to the entire stream for synchronization purposes. Consequently, program streams are often found in physical media formats, such as DVDs or Blu-rays.

Progressive Video

A video track that consists of complete frames without interlaced fields. Each individual frame is a coherent image at a single moment in time. This means a video could be paused and the entire image could be seen. All streaming files are progressive, and this should not be confused with the process of keyframes and p- or b-frames.

Reverse Telecine (aka: Inverse Telecine, IVTC)

This is a process used to reverse the effect of 3:2 pulldown. This is achieved by removing the extra fields that were inserted to stretch 24 frame per second film to 29.97 frames per second interlaced video.

RTMP (aka: Real Time Messaging Protocol)

A TCP-based protocol that allows for low-latency communication. In the context of video, it allows for delivering live and on demand media content that can be viewed in Adobe Flash applications, although the source can be modified for other playback methods.

RTP (aka: Real Time Transport Protocol)

A network protocol designed to deliver video and audio content over IP networks and runs on top of UDP. The components of RTP include a sequence number, a payload identification, frame indication, source identification, and intramedia synchronization.

RTSP (aka: Real Time Streaming Protocol)

A method for streaming video content through controlling media sessions between end points. This protocol uses port 554. Using this method, data is often sent via RTP. RTSP is a common technology found in IP cameras. However, some encoders, like Wirecast, can actually take the IP camera feed and deliver it in an RTMP format.


Silverlight

Microsoft’s Silverlight is both a video playback solution and an authoring environment. The user interface and description language is Extensible Application Markup Language (XAML). The technology is natively compatible with the Windows Media format.

Smooth Streaming (aka: IIS Smooth Streaming)

Microsoft’s Smooth Streaming for Silverlight is an adaptive bitrate technology. It’s a hybrid media delivery method that is based on HTTP progressive download. The downloads are sent in a series of small video chunks. Like other adaptive technology, Smooth Streaming offers multiple encoded bitrates of the same content that can then be served to a viewer based on their setup.

Streaming Video (aka: Streaming Media)

Refers to video and/or audio content that can be played directly over the Internet. Unlike progressive download, an alternative method, the content does not need to be downloaded onto the device first in order to be viewed or heard. It allows for the end user to begin watching as additional content is constantly being transmitted to them.


Transcoding

The process of transcoding involves converting one video type into another format. This is often done to make a file compatible with a particular service.


Transrating

Involves changing a video source from one bitrate to a different one. This process is often done to accommodate adaptive bitrate technologies, generating lower quality bitrates.

UDP (aka: User Datagram Protocol)

A widely used, connectionless way to transmit or receive audio and video over a network. In terms of real-time protocols, RTMP (Real Time Messaging Protocol) is based on TCP (Transmission Control Protocol), which led to the creation of RTMFP (Real Time Media Flow Protocol), which is based on UDP.

Video Compression

This process uses codecs to present video content in a less resource intensive format. Due to the high data rate of uncompressed video, most video content is compressed. Compression techniques range from straightforward processes, such as image compression, to sophisticated techniques such as interframe compression, which looks for redundancies between different frames in the video and only presents changes via delta frames from a keyframe point.

Video Encoding

A process to reduce the size of video data, oftentimes with audio data included, through the use of a compression scheme. This compression can be for the purpose of storage, known as program stream (PS), or for the purpose of transmission, known as transport stream (TS).

Video Scaling (aka: Trans-sizing)

A process to either reduce or enlarge an image or video sequence by squeezing or stretching the entire image to a smaller or larger image resolution. While this sometimes can just involve a resolution change, it can also involve changing the aspect ratio, like converting a 4:3 image to a “widescreen” 16:9 image.
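The arithmetic behind scaling is simple enough to sketch (these helpers are our own illustration; real scalers also handle filtering and pixel aspect ratios). Preserving the aspect ratio means the new height follows directly from the new width:

```python
from math import gcd

def scale_to_width(width, height, new_width):
    """Scale a resolution to a new width, preserving aspect ratio;
    round to even numbers, as many codecs require even dimensions."""
    new_height = round(height * new_width / width / 2) * 2
    return new_width, new_height

def aspect_ratio(width, height):
    """Reduce a resolution to its simplest ratio, e.g. 16:9."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(scale_to_width(1920, 1080, 1280))  # -> (1280, 720)
print(aspect_ratio(1920, 1080))          # -> 16:9
```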

VOD (aka: Video On Demand)

VOD refers to content that can be viewed on demand by an end user. The term is commonly used to differentiate from live content, as VODs are previously recorded. That said, previously recorded content can also be presented in a way that is not on demand, such as televised programming that does not give the end user control over what they watch.



Please visit our Support Center or Contact Sales for Ustream compatibility questions regarding the terms found in this video glossary.

Live at IBM Interconnect 2016

Posted on by

IBM Interconnect is the premier event to learn how to get the most out of your existing investments, with hands-on training in cloud and mobile solutions built for security, powered by cognitive, and equipped with advanced analytics. And now that Ustream is a part of the Cloud Video Services unit, we had the privilege of being a part of the event.

In addition to having the opportunity to meet our new IBM friends & family face to face, we were also excited for the chance to get hands-on experience and an insider’s look into some of the amazing technology that IBM is a part of. From dancing robots to a BB-8 droid that you can control with your mind, the future of IBM is evolving. We may be a bit biased, but the biggest stars of the show were located in the Cloud Video Services unit, specifically the folks at Clearleap, Aspera, Cleversafe and of course Ustream.

We were also honored to have the opportunity to hit the main stage at the Cloud/Mobile expo Theatre, where our VP of Product, Alden Fertig addressed the community and discussed how video has become a global medium for communication for entertainment, information and applications with his presentation: “Video Has Become a ‘First Class’ Data Type in Enterprise”.

Thank you for joining us at IBM Interconnect 2016, and we look forward to seeing you again next year! In the meantime, reach out to one of our sales representatives to learn more about how the IBM Video Cloud can help you and your business make the big leap into the future.


The New Viewing Experience is Here

Posted on by

We’re happy to announce that the new Viewing Experience we announced in December is now publicly available to all broadcasters!

The new channel design comes with lots of benefits:

  • Responsive layout that looks great on all screen sizes
  • Large player and more room for the chat
  • Large video gallery
  • Interactive Chat & Social Stream
  • Description now available for VODs
  • Fewer ads on the page and a lot more room for your content

Customization options include:

  • Cover image
  • About section with rich text formatting, including images and links
  • Links to external websites (Facebook, Twitter, Paypal etc)
  • Links to your other channels on Ustream

To see the new look in action, check out these channels that are already on the new design:

A great thing about the new design is that all customization will carry over to mobile. The iOS and Android app updates will be released in a few weeks.

From now on all new channels get the new design by default. If you have an existing channel you can choose to migrate to the new design manually until April 2nd, 2016 when the old channel design will be discontinued.

Visit your Dashboard to see what’s new and don’t forget to leave us feedback!

Westminster Kennel Club Dog Show 2016

Posted on by

Westminster Kennel Club Dog Show

Connecting people through the power of live video technology is something we take very seriously here at Ustream. After all, by 2017, 80% of all the world’s data will be video! But there is only one thing that we feel even more passionate about, and that is our love for dogs. At any given moment there are no fewer than 2 canines lounging around the Ustream office, and each one is treated as an equal member of the team. Even as we were in discussions with IBM, one thing was very clear: the dogs stay. Which is probably why the Westminster Kennel Club Dog Show is truly the must-watch event of the year. More than 2,000 dogs are vying for the ultimate title of Best in Show, and we all have our favorites.

The festivities kicked off Monday 2/15 at Madison Square Garden in New York City with the winners of the herding, non-sporting, toy and hound groups selected. Tonight we will learn who is the top dog in the sporting, working and terrier groups, leading up to the highly anticipated Best in Show announcement! And since an event this size entails far more action than can be captured on a single channel, the Westminster Kennel Club has dedicated 8 Ustream channels to bring you live action from every single ring.

Tune in LIVE tonight at 5:00 PST to learn who takes home Best in Show! 

A Brief History of Streaming Video

Posted on by

Video is everywhere, and by 2017, 80% of all the world’s data will be video. It feels like the world suddenly discovered live streaming, which is something we’ve been doing here at Ustream since 2007. In fact, don’t tell Periscope or Meerkat, but live streaming on mobile devices isn’t even new. We introduced our first broadcast-capable apps for iOS and Android all the way back in 2009 (really!) and since then we’ve helped Fortune 500 companies launch new products, broadcast concerts from famous recording artists, helped citizen journalists document events, and let family members share special moments via live, on-the-scene broadcasts.

Now don’t get us wrong … we’re very happy that live streaming is getting a lot of buzz and that everyone decided to join the party. We’d like to take a step back and show you a brief history of streaming video and how we got to where we are now!

A Brief History of Streaming Video


Live Demo: Getting Started with Ustream

Posted on by


No matter what size your company is, it’s a no-brainer that integrating live and recorded video into your communication strategy is key to success. But where do you start? Luckily our Technical Sales Engineer, Adam Pastana, is here to save the day and will walk you through getting started with Ustream, covering the basics of how to quickly get up and running with the Ustream platform. This live demo will include:

  • Uploading and managing videos
  • Creating live streams
  • Scheduling events
  • Embedding the video player on your site
  • Understanding your viewership analytics

Leveraging Ustream as your all in one video solution is easier than you think! Join us live on Thursday, February 11th, 2016 @ 11am PST | 2pm EST