Keyframes, InterFrame & Video Compression


The default mental image of video compression involves unwanted artifacts, like pixelation and blockiness in the image. That image, though, sells short the complexity that actually goes into compressing video content. In particular, it overlooks a fascinating process called interframe prediction, which uses keyframes and delta frames to intelligently compress content in a manner that is intended to go unnoticed.

This article describes this process in detail, while also giving best practices and ideal encoder settings that you can apply to your live streaming at Ustream.

Understanding Video Frames

There are a lot of terms and aspects of streaming technology that can be taken for granted. As a broadcaster matures, it pays to understand these elements in greater detail: why a process is done, and what the optimal settings are.

For example, a keyframe is something many broadcasters have seen mentioned before, or noticed as a setting in an encoder like Wirecast, without quite realizing what it is and how beneficial it is for streaming. A keyframe is an important element, but really only part of a longer process that helps to reduce the bandwidth required for video. To understand this relationship, one first needs to understand video frames.

Starting at a high level, most people realize that video content is made up of a series of frames. Usually measured in FPS (frames per second), each frame is a still image that, when played in sequence, creates a moving picture. So content created at 30 FPS contains 30 “still images” for every second of video.

An Opportunity To Compress: InterFrame

In an average video, if someone were to take 90 consecutive frames and spread them out, they would see a lot of elements that are pretty much identical. For example, if someone is talking while standing next to a motionless plant, it's unlikely that the information related to that plant will change. That amounts to a lot of wasted bandwidth used just to convey that something hasn't changed.

Consequently, when looking for effective ways to compress video content, frame management became one of the cornerstone principles. If that plant in the example is not going to change, why not just keep using the same elements in some of the subsequent frames to reduce space?

This realization gave birth to the idea of interframe prediction. This is a video compression technique that divides frames into macroblocks and then looks for redundancies between blocks. The process works through keyframes, also known as i-frames or intra frames, and delta frames, which only store changes in the image to reduce redundant information. This collection of frames is often referred to by the rather non-technical-sounding name of a “group of pictures”, abbreviated as GOP. Video codecs, used for encoding or decoding a digital data stream, all have some form of interframe management. H.264, MPEG-2 and MPEG-4 all use a three-frame approach that includes keyframes, p-frames and b-frames.
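
To make the GOP idea concrete, here is a minimal sketch in Python (the frame layout is illustrative only; real encoders choose frame types dynamically rather than following a fixed pattern):

```python
# Sketch: lay out frame types for one hypothetical group of pictures (GOP).
def gop_pattern(fps=30, interval_seconds=2):
    """Return an illustrative frame-type sequence for one GOP."""
    gop_size = fps * interval_seconds      # frames between keyframes
    frames = ["I"]                         # every GOP starts with a keyframe
    while len(frames) < gop_size:
        frames.extend(["B", "B", "P"])     # a common b/b/p cadence
    return frames[:gop_size]

print("".join(gop_pattern()))  # IBBPBBPBBP... (60 frames total)
```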

What Is A Keyframe?

The keyframe (i-frame) is the full frame of the image in a video. Subsequent frames, the delta frames, only contain the information that has changed. Keyframes will appear multiple times within a stream, depending on how it was created or how it’s being streamed.

If someone were to Google “keyframe”, they are likely to find results related to animation and video editing. In this article, we are using the word keyframe as it relates to video compression and its relationship to delta frames.
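
As a toy illustration of that relationship (real codecs work on macroblocks, not individual pixels, but the principle is the same), a keyframe stores the whole image while each delta frame stores only what changed:

```python
# Toy sketch: a "keyframe" stores every pixel; each "delta frame" stores
# only the pixels that differ from the previous frame.
def delta_encode(frames):
    encoded = [("keyframe", frames[0])]
    for prev, curr in zip(frames, frames[1:]):
        changed = {i: px for i, (old, px) in enumerate(zip(prev, curr)) if old != px}
        encoded.append(("delta", changed))
    return encoded

# Two 8-"pixel" frames where a single pixel changes: the delta frame
# carries one entry instead of the whole image.
print(delta_encode([[0, 0, 0, 5, 5, 0, 0, 0],
                    [0, 0, 0, 5, 7, 0, 0, 0]]))
# [('keyframe', [0, 0, 0, 5, 5, 0, 0, 0]), ('delta', {4: 7})]
```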

How Do P-frames Work?

Also known as predictive frames or predicted frames, a p-frame follows another frame and only contains part of the image in a video. It is classified as a delta frame for this reason. P-frames look backwards to a previous p-frame or keyframe (i-frame) for redundancies. How much of the image is present in a p-frame depends on the amount of new information between frames.

For example, p-frames of someone talking to the camera in front of a static background will likely only contain information related to the speaker's movement. However, someone running across a field as the camera pans will generate far more information in each p-frame, to capture both the runner's movement and the changing background.

What Are B-frames And How Do They Differ From P-frames?

Also known as bi-directional predicted frames, b-frames follow another frame and only contain part of the image in a video. How much of the image is contained in a b-frame depends on the amount of new information between frames.

Unlike p-frames, b-frames can look both backward and forward, to a previous or later p-frame or keyframe (i-frame), for redundancies. This makes b-frames more efficient than p-frames, as they are more likely to find redundancies. However, b-frames are not used when the encoding profile is set to baseline inside the encoder. This means the encoder has to be set to an encoding profile above baseline, such as “main” or “high”.

How Do You Set A Keyframe?

With regard to video compression for live streaming, the keyframe is set inside the encoder, configured by an option often called the “keyframe interval”.

The keyframe interval controls how often a keyframe (i-frame) is created in the video. The higher the keyframe interval, generally the more compression being applied to the content, although that doesn't necessarily mean a noticeable reduction in quality. As an example of how keyframe intervals work: if your interval is set to every 2 seconds and your frame rate is 30 frames per second, a keyframe is produced roughly every 60 frames.
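
That arithmetic is simple enough to sketch in code (the helper name here is hypothetical, not any encoder's API):

```python
def frames_per_keyframe(fps, interval_seconds):
    """Number of frames between keyframes for a given interval."""
    return round(fps * interval_seconds)

print(frames_per_keyframe(30, 2))     # 60
print(frames_per_keyframe(29.97, 2))  # 60 (59.94 rounded)
```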

The term “keyframe interval” is not universal, and many encoders have their own term for it. Adobe Flash Media Live Encoder (FMLE) and vMix, for example, use the term “keyframe frequency”. Other programs and services might call the interval the “GOP size” or “GOP length”, going back to the “group of pictures” abbreviation.

Choosing A Keyframe Interval At The Encoder Level

Where and how the keyframe interval is set varies from encoder to encoder.

For FMLE, this option, denoted as “Keyframe Frequency”, is found in the software encoder by clicking the wrench icon to the right of the format setting.

In Wirecast, this is set from the Encoder Presets menu, where the option is called “key frame every”. Wirecast is different in that the interval is denoted in frames. So for a 30 FPS broadcast, setting “key frame every” to 60 frames gives roughly a 2-second keyframe interval, as there are 30 frames every second.

For the vMix encoder, first click the gear icon near the streaming option, which opens the Streaming Settings. Near the quality option is another gear icon; clicking it opens a menu where the “Keyframe Frequency” can be modified.

Image: Setting the keyframe interval in Open Broadcaster Software (OBS) v0.542b

In Open Broadcaster Software (OBS), for versions after v0.55b, the keyframe interval can be set in the Settings area under Advanced. For versions of OBS before v0.542b, it's not obvious how to modify the keyframe interval, but it is still a component of Settings. Once there, go to Advanced and then select “Custom x264 Encoder Settings”. In this field, enter the following string: “keyint=XX”, with XX being the number of frames until a keyframe is triggered. As with Wirecast, if a keyframe interval of 2 seconds is desired and the FPS is 30, enter “keyint=60”.

For XSplit, the keyframe interval is a component of the channel properties. Under the Video Encoding area is a listing that says “Keyframe Interval (secs)”. To the far right of this is a gear icon; clicking it launches a “Video Encoding Setup” popup, where the keyframe interval can be specified in seconds.

Relationship Between Keyframes And Bitrates

Mileage with this explanation might vary, as encoders manage bitrates and keyframes differently. Using an encoder like Wirecast, one might notice that broadcasting someone talking against a still background has “higher quality” than broadcasting someone jumping up and down against a moving background, even with the same exact average bitrate and keyframe interval for both. The reason is, in part, that the delta frames have a ton of information to convey in the jumping example. There is very little redundancy, meaning a lot more data needs to be carried by each delta frame.

An encoder like Wirecast, though, tries its hardest to keep the stream around the selected average bitrate. Consequently, the added bandwidth needed for the additional information in the delta frames results in the quality being reduced to keep the average bitrate around the same level.

What’s The Best Setting For A Keyframe Interval?

There has never been an industry standard, although 10 seconds is often mentioned as a good keyframe interval, even though that's no longer suggested for streaming. The reason it was suggested is that, for a standard 29.97 FPS file, the resulting content is responsive enough to support easy navigation from a preview slider. To explain: a player cannot start playback on a p-frame or b-frame. So using the 10-second example, if someone tried to navigate to a point 5 seconds into the feed, playback would actually shift 5 seconds back, to the nearest keyframe. This was considered a good trade-off for smaller bandwidth consumption, although for reference, DVDs elected to use something much smaller than 10 seconds.

For live streaming, however, the recommended interval has dropped drastically. The reason is the advent of adaptive bitrate streaming. For those unfamiliar with adaptive streaming, this technology enables a video player to dynamically change between available resolutions and/or bitrates based on the viewer trying to watch. Someone with a slower download speed will be given a lower bitrate version, if available. Other criteria, like playback window size, will also impact which bitrate is served.

True adaptive streaming doesn't just make this check when the video content initially loads, though; it can also alter the bitrate based on changes on the viewer's side. For example, if viewers move out of range of a Wi-Fi network on their mobile device, they will start using their normal cellular service, which is liable to result in a slower download speed. As a result, the viewer might be trying to watch content at too high a bitrate for their download speed. The adaptive streaming technology should notice this discrepancy and switch to a different bitrate.

The keyframe interval comes into play here because that switch occurs at the next keyframe. So if someone is broadcasting with a 10-second interval, it could take up to 10 seconds before the bitrate and resolution change. In that time the content might buffer on the viewer's side before the change occurs, something that could lead to viewer abandonment.

Because of this, it's recommended to set your keyframe interval to 2 seconds for live streaming. This lets the video track change bitrates quickly, often before the viewer experiences buffering due to a degradation in their download speed.

What’s An IDR-Frame?

We have come nearly full circle at this point, but it pays to understand p-frames and b-frames and to get a crash course in adaptive streaming before talking about what an IDR-frame, or Instantaneous Decode Refresh frame, is. These are actually keyframes, and each keyframe can be either IDR or non-IDR. The difference is that an IDR keyframe works as a hard stop: an IDR-frame prevents p-frames and b-frames from referencing frames that occurred before it. A non-IDR keyframe allows those frames to look further back for redundancies.

On paper, a non-IDR keyframe sounds ideal: it can greatly reduce file size by being allowed to look at a much larger sample of frames for redundancies. Unfortunately, a lot of issues arise with navigation, and the feature does not play nicely with adaptive streaming. For navigation, say someone starts watching 6 minutes into a stream. That will cause issues, as the p-frames and b-frames might reference information that was never actually received by the viewer. For adaptive streaming, a similar issue arises when the bitrate and resolution change, because the new selection might reference data the viewer watched at a different quality setting that no longer lines up. For these reasons, it's always recommended to make keyframes IDR based.

Generally, encoders will either provide the option to turn IDR-based keyframes on or off, or won't give the option at all. For encoders that do not give the option, it's almost assuredly because the encoder is set up to use only IDR-frames.

Should Someone Use An “Auto” Keyframe Setting?

In short: no.

Auto keyframe settings are, in principle, pretty great. They automatically force a keyframe during a scene change. For example, switching from a PowerPoint slide to a shot of someone talking in front of a camera would force a new keyframe. That's desirable, as the delta frames would not have much to work with, unable to find redundancies between the PowerPoint slide and the camera image.

Unfortunately, this process does not work with some adaptive streaming technologies, most notably HLS. The HLS process requires keyframes to be predictable and in sync. Using an “auto” setting creates variable intervals between keyframes: the time between keyframes might be 7 seconds at one point, then 2 seconds later on if a scene change occurs quickly.

Image: Setting a whole-number keyframe interval in OBS v0.55b to disable auto switching

For most encoders, disabling “auto change” or “scene change detect” features often just means specifying a keyframe interval. For example, in OBS, if the keyframe interval is set to 0 seconds, the auto feature kicks in. Placing any whole number there, like 1 or 2, disables the auto feature.
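
For encoders that expose raw x264 settings, a fixed keyframe cadence is commonly expressed with parameters along these lines (a sketch assuming a 30 FPS broadcast and a 2-second interval; exact option names can vary by encoder version):

```python
# Hypothetical helper building an x264-style parameter string. Setting
# min-keyint equal to keyint and scenecut=0 keeps the keyframe cadence
# fixed, which is what HLS-style adaptive streaming expects.
def fixed_gop_x264_params(fps, interval_seconds):
    gop = fps * interval_seconds
    return f"keyint={gop} min-keyint={gop} scenecut=0"

print(fixed_gop_x264_params(30, 2))  # keyint=60 min-keyint=60 scenecut=0
```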

If an encoder, like Wirecast, has an option for “keyframe alignment”, be aware that this is not the same process. Keyframe alignment creates specific timestamps and is best suited for keeping multiple bitrates that the broadcaster sends through the encoder in sync.

Perfecting A Keyframe Strategy

With the advent of adaptive bitrates, the industry has arrived at a pretty clear answer on best practices for keyframes and live streaming. That strategy, summarized in the sketch after the list below, includes:

  • Setting a keyframe interval at around 2 seconds
  • Disabling any “auto” keyframe features
  • Utilizing IDR based keyframes
  • Using an encoding profile higher than baseline to allow for b-frames
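
Taken together, the strategy might be summarized like this (illustrative values only, not settings for any specific encoder; map them onto whatever fields your encoder exposes):

```python
# Illustrative summary of the keyframe strategy above.
RECOMMENDED_LIVE_SETTINGS = {
    "keyframe_interval_seconds": 2,  # short GOP so adaptive switching is quick
    "auto_keyframes": False,         # keep the keyframe cadence predictable
    "idr_keyframes": True,           # every keyframe is a clean decode point
    "profile": "main",               # above baseline, so b-frames are allowed
}
```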

This strategy allows for easy navigation of content for on-demand viewing after a broadcast, while still reaping the benefits of frame management and saving bandwidth by reducing redundancies. It also supports adaptive bitrate streaming, an important element of a successful live broadcast and of supporting viewers with slower connections.

Please Contact Sales for more questions on interframe and how Ustream can help you deliver high quality video alongside lower bitrate options through cloud transcoding.


Disclaimer: This article is aimed at helping live broadcasters, or at least those who plan a healthy video-on-demand strategy built on streaming. The answer to many of these questions would of course differ depending on the playback method. For example, for video content intended to be played from a video file, the “scene change” option is just one example of something that would be ideal. Some of these techniques only become undesirable in relation to streaming when using adaptive technology.

The History of Ustream at NAB



From April 16th through the 21st, the Las Vegas Convention Center will be taken over by 100,000 video professionals and content creators from 150+ countries looking for the chance to get hands-on experience with emerging technologies and the latest innovations in video production and delivery. NAB 2016 is right around the corner, and Ustream has had the privilege of attending the big event 4 years in a row. Let’s take a look back at some of the highlights and the history of Ustream at NAB.

2012
Ustream started our presence at NAB way back in 2012 by providing live coverage for our partners at NewTek, TWiT & Panasonic, combining all of the action into one super channel that helped viewers keep up with all of the excitement at the show.

2013
The theme of NAB 2013 was the evolution of broadcast media and how social media and consumer engagement are changing the industry landscape. Ustream's CEO & Founder, Brad Hunstable, had the pleasure of hosting a session about the “Reinvention of Live Media” that went into depth on how Ustream stays ahead of the curve in the new age of real-time consumer behavior. We also sponsored the Technology Awards Luncheon, where the National Association of Broadcasters gave recognition to some of the most innovative people in the video community.

2014
In 2014, Teradek broadcast coverage from NAB and updated online audiences on the latest and greatest announcements from the world's largest broadcast equipment manufacturers and industry influencers. The live show was streamed exclusively on Ustream for 32 hours over the course of 4 days and offered Spanish captioning for the very first time. Special segments were provided by a variety of partners, including Streaming Media, Philip Bloom & Broadcast Beat, who each offered their own unique perspective on the industry and provided a well-rounded report of everything happening on the show floor.

2015
NAB 2015 was also the debut of the Online Video Conference, where executives from digital media firms gathered to discuss issues such as online original content, the migration to over-the-top (OTT) content and online advertising metrics. This set the stage for Ustream to show off our latest solution for marketers, the Ustream Pro Broadcasting Video Marketing Module, along with our platforms for internal communicators and broadcasters, Ustream Align and Ustream Pro Broadcaster, in addition to being the exclusive onsite live streaming provider for clients such as Teradek, Maxon, Sony, Adobe and JVC.

What does Ustream have in store for NAB 2016? Well, you are going to have to join us in Las Vegas to find out! Register today using the code “LV7669” to get access to the show for free until April 1st. We look forward to seeing you there!

REGISTER NOW

Video Terms: Live Streaming & Hosting Glossary


A streaming media and video glossary that contains definitions of video terms, technologies and techniques related to live streaming, broadcasting and video hosting.

These video terms are relevant for both new techniques and legacy methods, which still have ramifications today when handling older media. The glossary will be continuously updated as the industry evolves.


# | A | B | C | D | E-H | I-J | K | L | M-O | P | R | S | T | U | V


2:3 Pull Down (aka: Three-two Pulldown)

A process used to convert material from film to interlaced NTSC display rates, from 24 to 29.97 frames per second. This is done by duplicating fields: 2 from one frame and then 3 from the next frame, or vice versa.
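
A toy sketch of that cadence (purely illustrative; real pulldown works on interlaced fields within frames):

```python
# Toy sketch of 2:3 pulldown: alternate emitting 2 and 3 fields per film
# frame, so 4 film frames (24 fps) become 10 fields = 5 video frames (30 fps).
def pulldown_fields(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

print(pulldown_fields(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```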

608 Captions (aka: line 21 captions, EIA-608, CEA-608)

These captions contain white text against a black box that surrounds the text. They appear on top of video content and support four caption tracks.

708 Captions (aka: CEA-708)

These captions were designed with digital distribution of content in mind. They are a more flexible form of captioning than the older 608 approach, allowing for more caption tracks, more character types and the ability to modify the appearance.

AAC (aka: Advanced Audio Coding)

This audio coding format is lossy, featuring compression that does impact the audio quality. It offers better compression and increased sample frequency when compared to MP3.

AC-3 (aka: Audio Codec 3, Advanced Codec 3, Acoustic Coder 3)

A Dolby Digital audio format found on many home media releases. Dolby Digital is a lossy format, featuring compression that will impact audio quality. The technology is capable of utilizing up to six different channels of sound. The most common surround experience is a 5.1 presentation.

Adaptive Streaming (aka: Adaptive Bitrate Streaming)

This streaming approach offers multiple streams of the same content at varying qualities. These streams are served inside the same video player and often differ based on bitrate and resolution. Ideally the player should serve the viewer the bitrate most appropriate to their setup, based on qualifications like download speed.
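
A sketch of the selection logic such a player might apply (the bitrate ladder here is hypothetical):

```python
# Pick the highest-bitrate rendition that fits the measured bandwidth,
# falling back to the lowest rendition when none fits.
def pick_rendition(measured_kbps, renditions=(400, 800, 1500, 3000)):
    fitting = [r for r in sorted(renditions) if r <= measured_kbps]
    return fitting[-1] if fitting else min(renditions)

print(pick_rendition(2000))  # 1500
print(pick_rendition(300))   # 400 (lowest available, as a fallback)
```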

Aspect Ratio

This relates to the width and height of a video, expressed as a ratio. The most common aspect ratios for video are 4:3 and 16:9. These are sometimes expressed as 1.33:1 (4:3 or “full screen”, which came from fitting older TV sets) and 1.78:1 (16:9 or widescreen). For film, other common aspect ratios include 1.85:1 and 2.35:1 (CinemaScope, TohoScope and other cinematic formats).

B-frames (aka: bi-directional Predicted Frames)

These frames follow another frame and only contain part of the image in a video. B-frames look backward and forward to a previous or later p-frame or keyframe (i-frame) and only contain new information not already presented.

B-roll

This is supplemental footage that offers additional options for editors when creating a final cut of a video. It can be audience shots, different angles and more. It is often used to spice up video presentations; for example, a presentation at a trade show might be enlivened by inserting b-roll footage of the booth to show activity. It is also commonly used in interviews as content to cut away to. The term originates from traditional film, where editors used a roll “A” and roll “B” of identical footage to cut between.

Bandwidth

In relation to video, bandwidth is used to describe an internet connection speed or as a form of consumption in relation to web hosting. For speed, it is used as a point of reference for an internet connection. When it comes to streaming content, this is important as a viewer has to have enough bandwidth in order to watch. For web hosting, bandwidth can be used as a measure of consumption.

Bit Rate (aka: data rate or bitrate)

The amount of data per unit of time. For streaming video and audio, this is usually given per second and expressed in kilobits per second (kbps) or megabits per second (Mbps).
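
A quick worked example of what a bitrate implies in practice:

```python
# Data consumed by one hour of video at 2,000 kbps (2 Mbps).
bitrate_kbps = 2000
seconds = 60 * 60
megabytes = bitrate_kbps * seconds / 8 / 1000  # kilobits -> megabytes
print(f"{megabytes:.0f} MB per hour")          # 900 MB per hour
```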

Bounce Light

A technique that involves bouncing light off a reflective surface onto the subject. This is done to achieve a softer, less harsh lighting effect than shining the light directly on the subject. It can also give the subject a more natural, even look.

Buffering

Video streaming involves sending video data to the end user in chunks. The video player builds a buffer of chunks that have not yet been viewed, letting the viewer keep watching from the buffer if a video chunk is delayed. Ideally, the delayed chunk arrives before the buffer empties, causing no disruption in viewing. However, a viewer's connection may be poor enough that the chunk does not arrive before the buffer is empty. When that happens, playback stops, the player generally displays a buffering message, and it waits for the missing chunk while attempting to rebuild the buffer.
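
A toy model of that behavior (purely illustrative, not how any real player is implemented):

```python
# Toy playback buffer: each tick, newly arrived chunks are buffered and
# playback consumes one chunk; an empty buffer means "buffering".
def play(arrivals):
    buffered, events = 0, []
    for received in arrivals:
        buffered += received
        if buffered > 0:
            buffered -= 1
            events.append("play")
        else:
            events.append("buffering")
    return events

print(play([3, 0, 0, 0]))  # ['play', 'play', 'play', 'buffering']
```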

CDN (aka: Content Delivery Network)

These are large networks of servers that have copies of data, pulled from an origin server, and are often geographically diverse in their location. The end user pulls the needed resources from the server that is closest to them, which is called an edge server. This process is done to decrease any delays that might be caused due to server proximity to the end user, as larger physical distances will result in longer delays, and ideally avoid congestion issues. Due to the resource intensive process of video streaming, most streaming platforms utilize a CDN.

CRTP (aka: Compressed Real Time Transport Protocol)

This is a compressed form of RTP. It was designed to reduce the size of the headers for IP, UDP (User Datagram Protocol) and RTP. For best performance, it needs a fast and dependable network; otherwise it can experience long delays and packet loss.

Deinterlace

Deinterlacing filters combine the two alternating fields found in interlaced video to form a clean shot in a progressive video. Without deinterlacing, the interlaced content will often display motion with a line-like appearance. Read more on deinterlacing for streaming.

Depth of Field (aka DOF)

This relates to the nearest and furthest objects in view that appear to be in focus. As a result, a deep depth of field will showcase nearly everything inside the frame in sharp focus. A shallow depth of field, on the other hand, will only have a narrow range of focus inside the video. For example, an interview that has the individual in focus but the background out of focus would be a shallow depth of field.

Digital Zoom

Unlike an actual optical lens change, this process gives the appearance of zooming in by cropping the image to a smaller portion of the available video frame. It maintains the same aspect ratio and gives the illusion of zooming in, but reduces the quality of the image to achieve the effect.

eCDN (aka Enterprise Content Delivery Network)

Generally an on-premises solution that enables scaling video delivery around a central location, such as a school or office, while reducing strain on the internal connection. For example, rather than sending 100 high definition live streams to one office and greatly taxing the available download speed, an eCDN allows one version to be sent and then distributed locally to reduce strain on the network.

Embedded Player

This is a media player that is enclosed in a web source, which can range dramatically from being seen in an HTML document on a website to a post on a forum. Players will vary based on appearance, features and available end user controls. An iframe embed, which can be used to embed a variety of content, is one of the most common methods of embedding a video player.

Encoding

Takes source content and converts it into a digital format. Often used in the context of encoders, which can be software or hardware based, that take live video sources and convert that content to be live streamed in a digital format. Often used interchangeably with transcoding, but encoding by definition takes an analog source and digitizes it.

H.264 (aka MPEG-4 Part 10, Advanced Video Coding, MPEG-4 AVC)

A video compression technology, commonly referred to as a codec, that is defined in the MPEG-4 specification. H.264 video is commonly stored in the MP4 container format.

HDS

Adobe’s HTTP Dynamic Streaming is an HTTP-based technology for adaptive streaming. It segments the video content into smaller video chunks, allowing switching between bit rates when viewing.

HLS

Apple’s HTTP Live Streaming is an adaptive streaming technology. It functions by breaking the stream down into smaller MPEG2-TS files. These files vary by bitrate and often resolution, and ideally are served to the viewer based on the criteria of their setup, such as download speed.
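
For a feel of how the variants are advertised, here is a sketch that prints a minimal HLS master playlist (the variant URIs and the bitrate ladder are hypothetical):

```python
# Each #EXT-X-STREAM-INF entry advertises one bitrate/resolution variant;
# the player picks among them based on the viewer's conditions.
variants = [
    (800_000, "640x360", "low/index.m3u8"),
    (1_500_000, "1280x720", "mid/index.m3u8"),
    (3_000_000, "1920x1080", "high/index.m3u8"),
]
lines = ["#EXTM3U"]
for bandwidth, resolution, uri in variants:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(uri)
print("\n".join(lines))
```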

Interlaced Video

A technique used for television video formats, such as NTSC and PAL, in which each full frame of video actually consists of alternating lines taken from two separate fields captured at slightly different times. The two fields are then interlaced or interleaved into the alternating odd and even lines of the full video frame. When displayed on television equipment, the alternating fields are displayed in sequence, depending on the field dominance of the source material.

IP Camera (aka: Internet Protocol Camera)

A digital camera that can both send and receive data via the Internet or a computer network. These cameras are designed to support a limited number of users who connect directly to the camera to view. They are RTSP (Real Time Streaming Protocol) based, and for that reason are not widely supported by broadcasting platforms without special encoders.

Jump Cut

A jarring transition from scene to scene, most often in footage that should appear sequential. For example, a man videotaped walking from left to right might suddenly jump ahead in the frame to a position without the intervening walking being shown. It can be used artistically, but also has a reputation for being the sign of a less polished production.

Keyframe (aka: i-frame, Intra Frame)

This is the full frame of the image in a video. Subsequent frames only contain the information that has changed between frames. This process is done to compress the video content. Read more on keyframes and video compression.

Key Frame Interval (aka: Keyframe Interval)

Set inside the encoder or when the video is being encoded, the key frame interval controls how often a keyframe is created in the video. The keyframe is a full frame of the image. Other frames will generally only contain the information that has changed.

Live Streaming

Relates to media content being delivered live over the Internet. The process involves a source (video camera, screen captured content, etc), an encoder to digitize the feed (Teradek VidiU, Telestream Wirecast, etc), and a platform such as Ustream or another provider that will typically take the feed and publish it over a CDN (Content Delivery Network). Content that is live streamed will typically have a delay in a magnitude of seconds compared to the source.

Lossless Compression

Lossless encoding is any compression scheme, especially for audio and video data, that uses a nondestructive method that retains all the original information. Consequently, lossless compression does not degrade sound or video quality meaning the original data could be completely reconstructed from the compressed data.

Lossy Compression

Lossy encoding is any compression scheme, especially for audio and video data, that removes some of the original information in order to significantly reduce the size of the compressed data. Lossy image and audio compression schemes such as JPEG and MP3 try to eliminate information in subtle ways so that the change is barely perceptible, and sound or video quality is not seriously degraded.

MPEG-DASH (aka: Dynamic Adaptive Streaming over HTTP)

An adaptive bitrate streaming technology. It contains the encoded audio and video streams along with manifest files that identify the streams. The process involves breaking the video stream down into small HTTP sequence files, which allow the player to switch between bitrates during playback.

MPEG-TS (aka: Transport Stream, MTS, TS)

A container format that hosts packetized elementary streams for transmitting MPEG video muxed with other streams. It can also have separate streams for video, audio and closed captions. It’s commonly used for digital television and streaming across networks, including the internet.

Optical Zoom

Depends upon the lens's ability to change focal length, making the subject appear closer or further away. This is often achieved by extending the lens, which does move it physically closer to the subject, although the effect really comes from shifting the internal ratio of the lens elements. This is in contrast to digital zooming, which simulates zooming in by cropping the image.

P-frames (aka: Predictive Frames, Predicted Frames)

A p-frame follows another frame and only contains part of the image in a video. P-frames look backwards to a previous p-frame or keyframe for redundancies.

Program Stream (aka: PS)

These streams are optimized for efficient storage. They contain elementary streams without an error detection or correction process, and assume the decoder has access to the entire stream for synchronization purposes. Consequently, program streams are often found in physical media formats, such as DVDs or Blu-rays.

Progressive Video

A video track that consists of complete frames without interlaced fields. Each individual frame is a coherent image at a single moment in time, meaning the video could be paused and the entire image seen. All streaming files are progressive; this should not be confused with the process of keyframes and p- or b-frames.

Reverse Telecine (aka: Inverse Telecine, IVTC)

This is a process used to reverse the effect of 2:3 pull down. It is achieved by removing the extra fields that were inserted to stretch 24 frame-per-second film to 29.97 frame-per-second interlaced video.

RTMP (aka: Real Time Messaging Protocol)

A TCP-based protocol that allows for low-latency communication. In the context of video, it allows for delivering live and on-demand media content that can be viewed in Adobe Flash applications, although the source can be adapted for other playback methods.

RTP (aka: Real Time Transport Protocol)

A network protocol designed to deliver video and audio content over IP networks and runs on top of UDP. The components of RTP include a sequence number, a payload identification, frame indication, source identification, and intramedia synchronization.

RTSP (aka: Real Time Streaming Protocol)

A method for streaming video content by controlling media sessions between end points. This protocol uses port 554, and data is often sent via RTP. RTSP is a common technology found in IP cameras. However, some encoders, like Wirecast, can take an IP camera feed and deliver it in RTMP format.

Silverlight

Microsoft’s Silverlight is both a video playback solution and an authoring environment. The user interface and description language is Extensible Application Markup Language (XAML). The technology is natively compatible with the Windows Media format.

Smooth Streaming (aka: IIS)

Microsoft’s Smooth Streaming for Silverlight is an adaptive bitrate technology. It’s a hybrid media delivery method that is based on HTTP progressive download. The downloads are sent in a series of small video chunks. Like other adaptive technology, Smooth Streaming offers multiple encoded bitrates of the same content that can then be served to a viewer based on their setup.

SSO (aka: Single Sign-On)

A shared session and user authentication service. It permits users to use the same login credentials, such as the same username/email and password, across multiple applications. Identity management services based around SSO include Okta, OneLogin, Google Apps for Work and more. In reference to video, this technology is often used to create a secure, internal video solution for enterprises.

Streaming Video (aka: Streaming Media)

Refers to video and/or audio content that can be played directly over the Internet. Unlike progressive download, an alternative method, the content does not need to be downloaded onto the device first in order to be viewed or heard. It allows for the end user to begin watching as additional content is constantly being transmitted to them.

Transcoding

The process of transcoding involves converting one digital video type into another format. This is often done to make a file compatible over a particular service. This process is different from encoding as transcoding involves converting a format that is already digital while encoding relates to converting an analog source to a digital format. Despite this, the terms are often used interchangeably.

Transrating

Involves changing a video source from one bitrate to a different one. This process is often done to accommodate adaptive bitrate technologies, generating lower quality bitrates.

UDP (aka: User Datagram Protocol)

A connectionless protocol widely used to transmit or receive audio and video over a network. In terms of real-time protocols, RTMP (Real Time Messaging Protocol) is based on TCP (Transmission Control Protocol), which led to the creation of RTMFP (Real Time Media Flow Protocol), which is based on UDP.

Video Compression

This process uses codecs to present video content in a less resource-intensive format. Due to the high data rate of uncompressed video, most video content is compressed. Compression techniques range from overt processes such as image compression to sophisticated techniques such as interframe prediction, which looks for redundancies between different frames in the video and only presents changes via delta frames from a keyframe point.

Video Encoding

A process to reduce the size of video data, often with audio data included, through the use of a compression scheme. This compression can be for the purpose of storage, known as program stream (PS), or for the purpose of transmission, known as transport stream (TS).

Video Scaling (aka: Trans-sizing)

A process to either reduce or enlarge an image or video sequence by squeezing or stretching the entire image to a smaller or larger image resolution. While this sometimes can just involve a resolution change, it can also involve changing the aspect ratio, like converting a 4:3 image to a “widescreen” 16:9 image.

VOD (aka: Video On Demand)

VOD refers to content that can be viewed on demand by the end user. The term is commonly used to differentiate recorded content from live content. That said, previously recorded content can also be presented in a way that is not on demand, such as televised programming that does not give the end user control over what they watch.


Please visit our Support Center or Contact Sales for Ustream compatibility questions regarding the terms found in this video glossary.

Live at IBM Interconnect 2016


IBM Interconnect is the premier event for learning how to get the most out of your existing investments, with hands-on training in cloud and mobile solutions built for security, powered by cognitive computing, and equipped with advanced analytics. And now that Ustream is part of the Cloud Video Services unit, we had the privilege of being a part of the event.

In addition to having the opportunity to meet our new IBM friends & family face to face, we were also excited for the chance to get hands-on experience and an insider's look into some of the amazing technology that IBM is a part of. From dancing robots to a BB-8 droid that you can control with your mind, the future of IBM is evolving. We may be a bit biased, but the biggest stars of the show were located in the Cloud Video Services unit, specifically the folks at Clearleap, Aspera, Cleversafe and of course Ustream.

We were also honored to have the opportunity to hit the main stage at the Cloud/Mobile Expo Theatre, where our VP of Product, Alden Fertig, addressed the community and discussed how video has become a global medium of communication for entertainment, information and applications with his presentation, “Video Has Become a ‘First Class’ Data Type in Enterprise”.

Thank you for joining us at IBM Interconnect 2016, and we look forward to seeing you again next year! In the meantime, reach out to one of our sales representatives to learn more about how the IBM Video Cloud can help you and your business make the big leap into the future.

CONTACT US

The New Viewing Experience is Here


We’re happy to announce that the new Viewing Experience we announced in December is now publicly available for all broadcasters!

The new channel design comes with lots of benefits:

  • A responsive layout that looks great on all screen sizes
  • A larger player and more room for the chat
  • A large video gallery
  • Interactive Chat & Social Stream
  • Descriptions now available for VODs
  • Fewer ads on the page and a lot more room for your content

Customization options include:

  • Cover image
  • About section with rich text formatting, including images and links
  • Links to external websites (Facebook, Twitter, PayPal, etc.)
  • Links to your other channels on Ustream

To see the new look in action, check out these channels that are already on the new design:

A great thing about the new design is that all customization will carry over to mobile. The iOS and Android app updates will be released in a few weeks.

From now on all new channels get the new design by default. If you have an existing channel you can choose to migrate to the new design manually until April 2nd, 2016 when the old channel design will be discontinued.

Visit your Dashboard to see what’s new and don’t forget to leave us feedback!

Westminster Kennel Club Dog Show 2016



Connecting people through the power of live video technology is something we take very seriously here at Ustream. After all, by 2017, 80% of all the world's data will be video! But there is only one thing we feel even more passionate about, and that is our love for dogs. At any given moment there are no fewer than 2 canines lounging around the Ustream office, and each one is treated as an equal member of the team. Even as we were in discussions with IBM, one thing was very clear: the dogs stay. Which is probably why the Westminster Kennel Club Dog Show is truly the must-watch event of the year. More than 2,000 dogs are vying for the ultimate title of Best in Show, and we all have our favorites.

The festivities kicked off Monday 2/15 at Madison Square Garden in New York City with the winners of the herding, non-sporting, toy and hound groups selected. Tonight we will learn who is the top dog in the sporting, working and terrier groups, leading up to the highly anticipated Best in Show announcement! And since an event this size entails far more action than can be captured on a single channel, the Westminster Kennel Club has dedicated 8 Ustream channels to bring you live action from every single ring.

Tune in LIVE tonight at 5:00 PST to learn who takes home Best in Show! 

A Brief History of Streaming Video


Video is everywhere, and by 2017, 80% of all the world's data will be video. It feels like the world suddenly discovered live streaming, which is something we've been doing here at Ustream since 2007. In fact, don't tell Periscope or Meerkat, but live streaming on mobile devices isn't even new. We introduced our first broadcast-capable apps for iOS and Android all the way back in 2009 (really!), and since then we've helped Fortune 500 companies launch new products, broadcast concerts from famous recording artists, helped citizen journalists document events, and let family members share special moments via live, on-the-scene broadcasts.

Now don’t get us wrong … we’re very happy that live streaming is getting a lot of buzz and that everyone decided to join the party. We’d like to take a step back and show you a brief history of streaming video and how we got to where we are now!

Infographic: A Brief History of Streaming Video

Live Demo: Getting Started with Ustream



No matter what size your company is, it's a no-brainer that integrating live and recorded video into your communication strategy is a key to success. But where do you start? Luckily our Technical Sales Engineer, Adam Pastana, is here to save the day: he will walk you through getting started with Ustream and cover the basics of quickly getting up and running on the Ustream platform. This live demo will include:

  • Uploading and managing videos
  • Creating live streams
  • Scheduling events
  • Embedding the video player on your site
  • Understanding your viewership analytics

Leveraging Ustream as your all-in-one video solution is easier than you think! Join us live on Thursday, February 11th, 2016 @ 11am PST | 2pm EST

REGISTER NOW

Ustream Align: Your Channel Page has a New Design



We all know that the power of Ustream Align can help you streamline your internal communications while maintaining control of your sensitive data. So it's no wonder that companies like Lyft, Slack & Pinterest turn to Ustream's technology to make sure their entire company, both in-house and remote, stays connected and has access to the resources they need to be successful. It's the obvious choice!

That's why we revamped the Ustream Align channel page to make the experience the best it can be for your viewers, including a responsive layout for high-resolution screens, easy switching between live and recorded videos, and more information about the content they are watching.

Visit your Ustream Align channel page to check out the new look and let us know what you think. Stay tuned, more exciting features for your Ustream Align channels are coming soon!


Allstate Unleashes Mayhem at the SEC Championship Game


Allstate claims that their insurance can protect you from “Mayhem” when it strikes, but what happens when the mayhem is controlled by your biggest football rival?

During the weeks leading up to the SEC Championship game, Allstate invited Crimson Tide and Gators fans to inflict Mayhem on their rivals with the assistance of two conveyor belts, one massive shredder and a full tailgate setup complete with a charcoal BBQ, patio chairs and an entire car. University of Alabama and University of Florida fans battled it out on Twitter, voting for Mayhem to inflict destruction on their rivals with the hashtags #ShredAlabama or #ShredFlorida, and watched live on Ustream as the carnage unfolded.

“Mayhem continues to be an iconic figure, reminding college football fans to protect themselves from the uncertainties of game day mishaps,” said Pam Hollander, vice president of marketing for Allstate. “And while college football fans can’t control the outcome on the field, we’re excited to give them the opportunity to have an impact off the field and show their team passion leading up to one of the biggest college football match-ups of the year.”

In partnership with long-time advertising agency Leo Burnett, fans of both teams had the chance to watch the chaos live on Ustream in addition to joining the action on Twitter. With the exception of Saturday's championship game, the #MayhemTweetOff featured no outside advertising, relying instead on social media, the live stream broadcast and the participating schools to get the word out. Do you have an upcoming event that you would like to promote? Now is the time to call!

CONTACT US