
Ustream’s eCDN: Enterprise Content Delivery Network


Wondering what an enterprise content delivery network (eCDN) is and why a company might need one?

Ustream’s eCDN helps enterprises serve large audiences located inside a single office. The solution allows companies to scale horizontally, so training sessions and other events can be streamed without causing bottlenecks on the local network.

Read on to learn what CDNs (Content Delivery Networks) are, then what an enterprise CDN is, and finally more about Ustream’s approach.

 

Understanding The Need For CDNs

CDNs are built around delivering content to audiences that are traditionally difficult for a small server network to manage. This can include audiences that are geographically diverse. Content distributed over the internet depends in part on the physical distance between the server delivering that content and the client trying to receive it. So, all other factors being equal, if the delivering server were located in the United States, it would take someone in Australia longer to receive the content than someone located in the United States. CDNs address this by placing servers, called edge servers, in diverse locations, often on a global scale.

Beyond geographic diversity, CDNs also manage the need to deliver content to large audiences. Through content caching, an edge server keeps a local copy of content that can be served to other viewers in the region. So if a stream is going viral, viewers in a concentrated area can all pull from the same edge server that has the content cached.
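
To make the caching idea concrete, here’s a minimal sketch, in Python, of the lookup logic an edge server performs. It’s illustrative only, not any particular CDN’s implementation: the first viewer in a region triggers a fetch from the origin, and everyone after that is served the locally cached copy.

  # Minimal sketch of edge-server caching logic (illustrative only).
  cache = {}

  def serve(segment_url, fetch_from_origin):
      """Return a video segment, preferring the locally cached copy."""
      if segment_url not in cache:
          # The first viewer in the region triggers one fetch from the origin...
          cache[segment_url] = fetch_from_origin(segment_url)
      # ...and every subsequent viewer is served the cached copy locally.
      return cache[segment_url]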

 

What Is An Enterprise Content Delivery Network?

Enterprises, even those utilizing top-of-the-line CDNs, can run into their own issues. 100 employees all trying to watch the same high definition (HD) training video at the same time can easily cause local congestion.

An enterprise content delivery network, often abbreviated as eCDN, is intended to resolve these issues at the local level while a popular SaaS (Software as a Service) offering is in use. This is achieved through a local installation of what is essentially an edge server placed directly inside the corporate network, allowing organizations to horizontally scale resource-intensive content like video streaming.

eCDN diagram

 

Introducing Ustream’s eCDN

Recently announced by IBM, Ustream’s eCDN functions like an edge server installed inside an office. It can be delivered as a standalone hardware appliance or as a virtual appliance that runs on virtualization platforms. It ultimately reduces local network strain by delivering a single instance of a video asset to the office.

The offering works in a WAN (Wide Area Network) environment and behind corporate firewalls, requiring only port 80 and port 443, the ports for HTTP and HTTPS, to be open. A web-based administration portal is provided as well, enabling administrators to see concurrent users and a health check on nodes within the eCDN. Staying true to the SaaS nature of Ustream, the enterprise CDN is also updated and optimized by Ustream.
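
If you want to sanity-check that those two ports are open between a workstation and the appliance, a reachability test can be scripted in a few lines. This Python sketch is just an illustration; the appliance hostname is a hypothetical placeholder, not a real Ustream address.

  import socket

  ECDN_HOST = "ecdn.example.internal"  # hypothetical appliance hostname

  for port in (80, 443):  # HTTP and HTTPS, the only ports the eCDN needs
      try:
          socket.create_connection((ECDN_HOST, port), timeout=3).close()
          print("port %d: reachable" % port)
      except OSError:
          print("port %d: blocked or unreachable" % port)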

 

Global Reach From eCDN

Multinational companies, or even just multi-office corporations, can also utilize eCDN. At a bare minimum, there is an option to utilize fallback logic around the eCDN. So if an issue occurs, like the power going out at a location, delivery can fall back to Ustream’s SD-CDN (Software Defined Content Delivery Network) approach, which includes multiple CDNs. Viewers outside the range of the eCDN, for example someone working from home, can still access the video asset as well.

For a more robust solution, the enterprise content delivery network can also support multiple instances. Consequently, companies with several offices can utilize eCDN at each location so that no particular office has to be prioritized over the others. The solution allows companies to effectively scale their video streaming, regardless of internal scale, delivering even high definition content to a corporate audience with faster response times.

 

Ustream eCDN Release

The new offering includes full functionality for live streaming content. This includes the ability to reach mobile devices within the eCDN by delivering a cloud-transcoded version of the stream in HLS, which supports iPhones, iPads, Android devices and more. Additional features, such as cloud-transcoded bitrates used for the adaptive bitrate process, also integrate smoothly with eCDN.

Want to learn more about eCDN and how it could work to resolve internal congestion on your enterprise’s network?

Contact Us Today

Customize Channel Page: New Viewing Experience


Today, April 14th, marks the complete launch of the new Ustream viewing experience and the discontinuation of the old experience. This impacts all existing Ustream channel pages that were created before February 18th, 2016, and were not already migrated to the new design.

The new viewer experience brings with it a multitude of improvements for the desktop and mobile experience around these pages. This includes a cleaner look and additional features for broadcasters to customize channel page content.

Expanded Channel Page Options

The new viewing experience adds numerous ways to customize a channel page. It offers broadcasters more opportunities to relay their message and provide complementary resources surrounding their broadcasts. Many of these new features are controlled through an expanded Channel Page menu found in the dashboard of a Ustream account.

Among these features are the ability to update the cover image, about text, links, video gallery (“show recorded videos”) and settings for the “other channels” feature.

We’ll dive into each new feature in terms of the functionality that can be expected and sometimes how to best use it.

 

Cleaner Look + Cover Image

The new viewing experience offers a much more streamlined and clean-looking interface. This includes a similar experience across desktops, mobile phones, and tablets, in contrast to the divergent designs they each have today. The new look utilizes a lot of white in the display, rather than darker, competing background colors. This places greater emphasis on the unique channel page content, such as the video player and information related to the new features that broadcasters can control.

Customize Channel Page Cover Image

One of the major new additions is the cover image for channels. This appears near the top of the channel page and functions as a backdrop, with the channel image appearing elevated on top of the cover image to draw greater attention to it. On mobile, the cover image and the channel image appear around the middle of the screen to make room for the player, which sits at the top of the screen as usual in video apps.

Right now, uploaded images must be 2560 x 852 pixels to best accommodate devices with varying resolutions. No other image sizes are currently permitted. In the cover image settings you can find the specific safe area within these dimensions that is guaranteed to appear on all screen sizes.

 

About: Description Updated

The concept of a description or “about” area for channel pages is nothing new. What has changed, though, is the degree of functionality in this area. This includes rich text formatting that introduces the ability to bold, italicize, and use headings. Hyperlinks can also be added by highlighting text and hitting the link button. Finally, there are also options to add graphics, which have to be hosted outside of Ustream, and to add horizontal lines. You can also use numbered and bulleted lists.

There is also an option to make the “about” area the default view when a viewer goes to the channel page, as opposed to the video gallery area.

 

Links

The “links” feature found under the “channel page” dashboard allows broadcasters to define URLs to link to, entering the URL and the link text for each. These links can be edited or deleted at any time.

Other Links for Channel Pages

Once enabled, the links appear in the “about” area of the viewing experience. An icon will also populate to the left of each link. The icon graphic will be a link chain, with the exception of Facebook, Twitter, Instagram, and YouTube pages, which get their own logos.

Links appear before content entered into the “about” option in the channel page area. Links are ordered, from left to right, based on when they were added to the “links” area. If the broadcaster needs to add a new link that should appear at the front, it’s recommended to simply edit the first link and replace it with the new URL. The replaced link can then be added as a new link, and will appear at the end.

 

Video Gallery

Broadcasters can add a video gallery to their channel page that will feature their uploaded VOD (video on demand) content. The gallery shows a thumbnail to represent each video, the title of the video, how long ago it was uploaded, and the total views the video asset has received.

Control-wise, this feature can be enabled or disabled under the “channel page” settings, as long as the channel page is not set to “unpublished”. Be aware that when used, all video content associated with the channel will be listed here. As previously mentioned, the video gallery is the default landing area for the viewer, unless the option is enabled to make the “about” area the default.

 

Upcoming Events

Events on Ustream give broadcasters a powerful way to inform their viewership of upcoming broadcasts. Not every broadcaster streams 24/7, and letting viewers know when they can expect the next live stream can bolster both your audience size and viewer retention for future streams.


This feature adds an “upcoming” area to the channel page that viewers can select to see a list of events. This provides the date each will occur on, the time of day, the title of the event and an option to be reminded of when the event will occur. The remind feature can be linked either through a Facebook account or through a Ustream account to send notifications to the viewer.

As a side note, there is no “on/off” switch for this feature; unlike the others, it’s not controlled from the “channel page” settings. If events are scheduled, the “upcoming” area will populate with them.

 

Other Channels

One of the goals of this update is to help broadcasters better own their Ustream viewing experience. One approach toward achieving this is through the introduction of the “other channels” feature.

Alongside the “about” and “video gallery” areas, viewers can select the “other channels” option to see a list of other channels from that broadcaster.

For the broadcaster, this feature is completely customizable. It offers three settings which are:

  • List all of my channels
  • List selected channels
  • List no channels (disable feature)

Selecting channels for this area is fairly flexible. It’s controlled by entering the Ustream channel page URL, which means users can even enter channels from other broadcasters. This allows for synergy, from a simple “channel exchange” with others to listing channels from other departments under a single company.

 

Story Behind The New Viewing Experience

The old viewing experience had been around for years, with no significant upgrades to the design. As a result, the experience did not evolve with industry trends and did not provide the experience desired at Ustream to best utilize features such as the ability to create galleries of video content.

Once deciding that the experience would be drastically improved, we established requirements that we wanted the new design to address. This included:

  • Clean design
  • Large player
  • Simple way to chat
  • Video gallery
  • One primary customization option
  • Consistency across platforms

Updating the experience presented its own challenges. Not only did the new design have to work across desktop and mobile, but it also had to accommodate the over 1 million customers utilizing the portal. Consequently, updating the design was broken into stages.

Stage one started on December 16th, 2015, and offered users the chance to preview the design using their own content, provided it wasn’t password protected and didn’t have the channel page disabled. This offered a valuable opportunity for users to present feedback before committing to elements of the redesign.

Stage two was a slow rollout that began on February 18th, 2016. This gave broadcasters the opportunity to migrate early to the new design and also marked the point when all newly created channel pages would automatically use the new viewing experience.

 

Complete Launch

Stage three is today, and sees all channel pages converted over to the new design. Many months in the making, the new viewing experience is now available on all Ustream channel pages, offering a clean, unified look that introduces more features for broadcasters to utilize.

If you want to learn more about the new viewing experience and how to control each new feature, read this resource guide.

Want to give the new viewing experience a test drive? Try out Pro Broadcasting and start using these new features on large scale video streams.

Get Started

Viva Las Vegas! Come see Ustream at NAB 2016



We’re dusting off our blue suede shoes, packing our bags and are ready to hit the road for the NAB 2016 Show in Las Vegas! Will you be joining us? This will be our 5th appearance at the conference, and trust me when I say that it gets better every year. Especially now that we are part of the IBM family, this time around is already shaping up to be one for the ages, and we cannot wait to show you what we have in store.

Members of the Ustream team will be onsite and are looking forward to meeting you at the Ustream booth #SU11413. Book a meeting to speak with us, or stop by during our happy hour on Wednesday, April 20th, from 5:00p until the expo floor closes, and be one of the first 100 visitors to receive a custom etched “Ustream at NAB 2016” commemorative beer pint or wine glass that you can take home and cherish for years to come. We will also be available at the IBM Video Cloud booth #SL3305 to answer any questions you may have about Ustream’s powerful video technology and how our services fit into the IBM Video Cloud unit. While you are there, make sure to catch one of our daily Ustream theater presentations: 4/18 @ 5:30p, 4/19 @ 1:30p, 4/20 @ 9:30a & 4/21 @ 9:30a.

Ustream + IBM are planning a lot of great sessions, but there is more to see at NAB. Here are some of the talks we are excited for, which give a sneak peek into the direction the broadcasting industry is headed and which the live streaming community shouldn’t miss:

Broadcast Minds™: Where Today’s Content Leaders Discuss Tomorrow’s Trends
Monday, April 18 | 4:30p – 6:00p

  • This session is presented by NewTek, manufacturers of the TriCaster encoder, who are bringing together IP leaders to discuss where the broadcast industry might be headed. The segment will look at today’s IT technology before turning to where live production might shift in the years to come.

Cisco Presents – Media in a World of Exponential Technology Advances
Tuesday, April 19 | 12:30p – 1:30p

  • Produced in sponsorship with Cisco, this panel attempts to predict the disruptions that will transform the media landscape, similar to the drastic change in the camera market, where consumer-friendly video cameras continue to improve in quality while decreasing in size. This panel is a bit more media focused, but there is a high probability the discussion will be relevant to broadcasters outside of this industry.

4K, UHD, HDR and More – The Future of Video
Tuesday, April 19 | 2:30p – 3:30p

  • Sponsored by Ericsson, this super session looks at how the current ramp up toward improved video quality is shaking out at the consumer level. It then dives into a panel featuring people from production houses, agencies, broadcasting, and manufacturing who forecast where video is headed in the future.

We look forward to seeing you there, and in the meantime VIVA LAS VEGAS!

MEET WITH USTREAM AT NAB

Slack Video Integration for Live Notifications


Using Slack for its collaborative capabilities? Looking for ways to bolster that effort through automatic notifications about your streaming projects? Ustream is introducing a new Slack video integration for Ustream video channels. The integration allows broadcasters to link to a Slack channel to push automatic notifications, and works with both Ustream Align for internal communication and Pro Broadcasting. It creates an easy way for team members and followers to stay informed about the latest live stream or video on demand content.

What is Slack?

Slack functions as a powerful and engaging tool to facilitate communication, largely for internal use cases. In 2014, a year after the app launched, engagement averaged 10 hours per user. The service helps keep people connected by being available on desktops and mobile devices, with apps for Android, iOS, and even a beta version for Windows phones.

Slack Notification Use Cases

The Slack integration feature for Ustream can be enabled on “public” channels on the messaging platform. This feature is one of the few Connection options that works with Ustream Align. The use cases for this implementation are numerous. For example, a general company channel can be linked to notify the team of important streams like a CEO town hall meeting. The notifications can also inform channel members when training sessions begin or when training resources are updated.

For event use, a marketing team can be quickly notified when a broadcast for an event is live. The notification can be used as a sync point to tell others to begin increased social media efforts. It can also supplement webinars, notifying a team to queue up internal resources if the stream is expected to bring in live enquiries.

For public use, there are many open communities that allow for relevant conversation around a topic. For example, looking to live stream a construction site for a new skyscraper? Chances are a Slack community exists around that interest and can be easily notified and engaged whenever the broadcast goes live. Although Slack is generally associated with internal use, public-facing channels do thrive with passionate participants, and plenty of directories catalogue the many public channels available.

Setting Up Slack Integration

Linking a Ustream channel with a Slack channel is a quick and easy process.

While logged into a team on Slack, a broadcaster needs to login to their Ustream account and go to the Connections tab under Account. One of the connection options on this tab will be for Slack. Clicking the connect button will redirect to an authorization page, where authorize can be clicked to redirect back to the Connections tab.

The Slack team account has now been integrated with the Ustream account. The integration can be disabled at any time by clicking the Disconnect button.

Check our Slack Integration Set Up Guide for a more detailed explanation of enabling Slack integration, with images for each step of the process.

Enabling Slack Notifications

After setup, a new tab called Slack Notifications will be added to Ustream channels on the account. Selecting this allows a broadcaster to enable the feature and designate which Slack channel the notifications will be sent to. Note that the dropdown will include all Slack channels connected to your team, regardless of whether the individual who set up the Slack integration is a part of that channel or not. “Private” channels are not available on the list.

Once enabled, the Slack integration will offer notifications for participants in the Slack channel. These notifications are sent whenever a Ustream channel goes live or new video content is added. In either of these events, a message is pushed to the corresponding Slack channel. For a stream going live, this message will appear like the following:

Ustream BOT
Live now

[Channel title]
[Channel description]

Both of the elements in brackets are controllable by the broadcaster. The “Channel Title” can be set in the Info tab. The “Channel Description” can be set as part of the Channel Page tab, under the About settings. The description will truncate if it is longer than 140 characters, ending in an ellipsis (…).

The channel title will act as a link, leading back to the Ustream channel page. If the Ustream channel page is disabled, the link is removed, although the title name will still appear.
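
For the technically curious, a similar “live now” message can be reproduced with a standard Slack incoming webhook. The Python sketch below only illustrates Slack’s webhook mechanism, not how Ustream’s integration is implemented internally, and the webhook URL, channel link, and text are placeholders.

  import json, urllib.request

  WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

  payload = {
      "username": "Ustream BOT",  # display name for the posting bot
      "text": "Live now\n<https://www.ustream.tv/channel/example|Channel title>\n"
              "Channel description",
  }
  req = urllib.request.Request(
      WEBHOOK_URL,
      data=json.dumps(payload).encode("utf-8"),
      headers={"Content-Type": "application/json"},
  )
  urllib.request.urlopen(req)  # Slack posts the message to the linked channel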

For video content being added to a Ustream channel, a similar but distinct message will appear on the Slack channel to notify users.

Ustream BOT
New video on [Channel Title]

[Video title]
[Video description]

The bracketed information can be customized by the Ustream broadcaster. This is controlled by going to the Videos tab for the channel and editing the video. The edit panel allows the video title and description to be edited.

A single Slack channel can be linked to a Ustream channel at a time. However, a broadcaster can quickly change their Slack channel in the Connections tab at any time.

Note: The message informing a Slack channel that a stream is going live is not linked to the Event feature, where a broadcaster or company can set a specific date and time that an event will begin. The message instead publishes whenever live video content is published from the encoder to that channel. This makes it faster to set up a broadcast, but users should be aware of it in regards to test streams, and will want to disable the Slack notification when doing a test live stream for this reason.

Slack Integration Feature History and Launch

The Slack Integration feature was actually born as a December 2015 hackathon idea.

The feature is launching today, April 7th, 2016. It will be available on all plan levels at Ustream. This includes Align, where it is the second Connection feature after YouTube.

Want to learn more about Ustream Align and how this Slack video integration can help bolster your internal communication?

Contact Us Now

Interlaced Video & Deinterlacing for Streaming


Have you ever seen video content that looks like the image to the right, but weren’t sure of the cause? These overt horizontal lines, appearing as pixelation around movement like something out of an old-school Atari game, are an artifact created by presenting an interlaced source in a progressive format.

This article explains what interlaced video content is and which sources, such as analogue cameras, can produce this type of video on live streams. It then goes over deinterlacing techniques to remove this artifact, how to easily enable them on the encoder side… and why you wouldn’t want to use deinterlacing on content that is already progressive.

What Is Interlaced Video?

Interlaced video is a technique that was created and popularized before the advent of digital televised content. First developed over 70 years ago, it was used primarily for television video formats like NTSC and PAL.

At its root, interlacing was an early form of video compression, used to make video look smoother while sending less data. This was achieved by breaking up each full frame of video into alternating lines taken from two separate fields that were captured at slightly different times. One set of lines would be delivered to the viewer, followed 1/60th of a second later by the second set.
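
As a rough model of those mechanics, here is a short Python sketch (assuming numpy, with a frame represented as a 2-D array of lines) that splits a frame into its two fields and weaves two fields, captured a moment apart, back into a single frame:

  import numpy as np

  def split_fields(frame):
      """Split a full frame into its even-line and odd-line fields."""
      return frame[0::2], frame[1::2]

  def weave(field_even, field_odd):
      """Interleave two fields (captured ~1/60 s apart) into one frame."""
      frame = np.empty((field_even.shape[0] * 2,) + field_even.shape[1:],
                       dtype=field_even.dtype)
      frame[0::2] = field_even  # lines from the first field
      frame[1::2] = field_odd   # lines from the second field, a moment later
      return frame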

In contrast to other possible methods of the time, this process produced what appeared to be smooth movement, at least to the human eye, while sending less data. Interlacing can cause issues, though, when delivering that feed to a progressive display, due to the differences in presentation between the two.

 

Progressive Video And How It Differs From Interlaced Video

Unlike interlaced content, progressive video is a video track that consists of complete frames. There is a slight asterisk to this statement, as techniques like interframe can be used to compress video content and remove redundancies from frame to frame (read more about the interframe process). Even with this technique, progressive video content will not alternate fields: it presents full frames and will never serve odd or even lines at different time intervals from each other.

Consumers will be familiar with this terminology due to its proliferation in HD content. For example, 1080p content means it has a vertical resolution of 1080 lines while the “p” relates that this is progressive content.

 

Which Method Is Better: Progressive Or Interlacing?

To be blunt, it actually doesn’t matter which is better. Many playback methods, like computer monitors and modern HD TVs, do not support interlacing. So even if interlacing provided better-looking content, a broadcaster would still want to go with progressive delivery because of that support. Otherwise, the broadcaster would be displaying interlaced video in a progressive format.

Assuming both methods were supported equally, the human eye can’t keep up and the motion should look smooth regardless.

 

What It Looks Like: Interlaced Content As Progressive Video

Sometimes a broadcaster needs to use an interlaced source for streaming. In other words, they must take an interlaced source and make it progressive, or watch it on a progressive medium like a computer monitor. This need can range from wanting to use an older broadcast to using an analogue camera that outputs interlaced video.

Converting the video involves combining the two fields that were created as part of the interlacing process into a single frame. By default, this process creates a rather ugly artifact around high motion in the video track. The motion between fields can cause visible tearing when displayed as progressive video. Essentially, the video track shows two different line fields where the fast motion is occurring, creating a staggered line appearance, as seen in the left half of the image below.


Left: Interlaced video shown in a progressive format. Right: Deinterlaced video (more on this later).

 

How To Tell If Your Camera Captures Interlaced Video

Much of this article has talked about interlacing as a legacy component, but that isn’t a fair representation. A lot of analogue cameras, for example, are set up to deliver video in an interlaced manner, and even some modern digital cameras still offer an interlaced mode. The reasoning is partly compatibility, and partly that 1080 resolution, even as 1080i, is a strong selling point while being cheaper to produce than 1080p. So even though interlacing might be associated with older, televised broadcasts, it’s still very possible to use an analogue camera with a capture card, or another setup, and run into interlacing.

One way to tell whether your camera is set up for interlaced content is the specs. While some will be overt, stating that the camera outputs in interlaced mode, others will state it in their listed resolution. For example, we already discussed that 1080p is an HD feed that is progressive. If it stated 1080i, though, it would mean HD, interlaced content. Chances are good that someone has seen 1080p content much more frequently than the interlaced version. Most modern analogue cameras, if they are interlaced, should mention it either directly or in the resolution. If it’s an older analogue camera from before 2003, it outputs interlaced content, as the first consumer-affordable progressive camera, the Panasonic AG-DVX100, was released in 2002.

 

What Is Deinterlacing Video: When You Have To Use Interlaced Sources

Thankfully, there is a process called deinterlacing that solves the issues created by presenting interlaced content in a progressive medium. Deinterlacing uses every other line from one field and interpolates new in-between lines without tearing, applying an algorithm to minimize the resulting artifacts.
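
A naive deinterlacer along these lines might keep one field and rebuild the missing lines by averaging their neighbors. The Python sketch below is a toy illustration only; real encoders use far more sophisticated, often motion-adaptive, algorithms.

  import numpy as np

  def deinterlace(frame):
      """Keep the even field; rebuild odd lines by averaging neighbors."""
      out = frame.astype(np.float32)
      # Replace each odd line with the average of the lines above and below,
      # discarding the second field and the tearing it would introduce.
      out[1:-1:2] = (out[0:-2:2] + out[2::2]) / 2
      return out.astype(frame.dtype)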

 

How To Deinterlace Video For Live Streaming

Deinterlacing is done at the encoder level for live content. How this is done varies from encoder to encoder, with some enabling it through a simple check box.

For Ustream Producer, deinterlacing is found under source settings, via Sources and then “Show Source Settings…”. If a source is being used that can be deinterlaced, a checkbox will appear to enable it for that source.

Adobe Flash Media Live Encoder (FMLE) users can find the deinterlace option on the main Encoding Options panel. Simply called “Deinterlace”, this feature is found to the left of Timecode at the bottom of the available options.

Teradek encoder products, such as the Cube and VidiU, offer built-in hardware based deinterlacing. Inside the interface for the encoder, this feature is found under Encoder Settings. Located above Adaptive Framerate, this feature is called simply Deinterlacer and can be enabled or disabled.

On Wirecast, this is found under Sources and then “Show Source Settings…”. From this screen you can select your source, with most having two options available. For example, a capture card source might show “Capture Device Size” and “Device Deinterlacing”. Changing the latter from “None” to “Blend” will activate deinterlacing.

If someone is using an older version of Wirecast, this option is instead located under File > Preferences > Advanced.

For vMix, the user has to click Add Input in the left corner to open the input selection panel. The options present will depend on the type of source selected. If selecting a source like a camera, an option called Interlaced should be present, located below Frame Rate. Unlike other encoders, to deinterlace content this option needs to be unchecked.

 

Another Source Of Interlaced Video: Three-two Pull Down

Sometimes referred to as 2:3 pulldown, three-two pull down is a process used to convert material from film to an interlaced NTSC display rate. This involves taking content created at 24 frames per second and converting it to 29.97 frames per second, the signal frame rate of NTSC video. The process duplicates fields, two from one frame and then three from the next frame, or vice versa. Consequently, it’s common for this to be called 3:2 pulldown or 2:3 pulldown, with the numbers used interchangeably to describe the effect.
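
To make the cadence concrete, here is a small sketch of how four film frames (A, B, C, D) become five interlaced video frames; the field labels are purely illustrative:

  # Four film frames at 24 fps, each split into top (t) and bottom (b) fields.
  # A 2:3 cadence repeats fields so that ten fields become five interlaced
  # frames, taking the rate from 24 to ~30 frames per second.
  pulldown = [("At", "Ab"),   # video frame 1: pure film frame A
              ("Bt", "Bb"),   # video frame 2: pure film frame B
              ("Bt", "Cb"),   # video frame 3: mixes frames B and C
              ("Ct", "Db"),   # video frame 4: mixes frames C and D
              ("Dt", "Db")]   # video frame 5: pure film frame D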

 

Reverse Telecine: Removing the 3:2 Pull Down

Also known as inverse telecine (IVTC), reverse telecine is a process that removes the effects of stretching a source from 24 frames per second to 29.97 frames per second. This involves removing the added information from the frames to return the content to 24 frames per second.

For example, frame 1 might be converted into frame 1A and frame 1B through interlacing, with each being a vertical odd or even sequence that is interlaced. However, frame 2 might be converted into frame 2A, frame 2B and frame 2C, with the last one being duplicated content that is used to gradually increase the frame rate. As part of reverse telecine, this added content would be removed to restore the video to its original frame rate.

If you want to live broadcast content that previously had a 3:2 pull down applied to it, it’s recommended to encode it with a reverse telecine process ahead of the broadcast. Apple Compressor and Handbrake (the latter calls this process “detelecine”) are two examples of programs that can achieve this.

 

Can Deinterlacing Video Be Bad?

Yes. If the source is not interlaced, then deinterlacing can introduce needless artifacting if the methods used are inadequate. This will be most noticeable on motion, which suffers a greater loss of quality. Fine, rounded details can also suffer, often converting a smooth look into a blocky one, like miniature stairs, similar to the pixelated curves in older video games. If blend deinterlacing is used, it can also show obvious motion ghosting within a single frame.

In addition, deinterlacing is more CPU intensive, so an encoder using deinterlacing needs to run on a more powerful machine than a similar encoder without it. From a reliability standpoint, that’s another reason not to use the feature unnecessarily.

So if a source is not interlaced, do not apply deinterlacing to it. If someone isn’t sure whether a source is interlaced, a quick test broadcast without deinterlacing will tell: after some motion occurs in the feed, it should be easy to see whether the source needs to be deinterlaced.

If someone is dealing with mixed content, where part of the video is interlaced and other elements are not, it’s up for debate whether the entire feed should be deinterlaced. Interlaced content displayed in a progressive manner is much more disruptive to the viewing experience than the artifacts introduced by inadequate deinterlacing of already progressive content. For this reason, I personally recommend deinterlacing when dealing with mixed content. Schools of thought go both ways, though. For example, if the amount of interlaced content is minimal, like briefly showing an older TV playing interlaced content, a broadcaster can get away without it.

 

Summary: Know Interlacing And How To Correct It

Many modern broadcasters will never encounter interlaced content in their own broadcasting. For example, someone using just a webcam and a software-based encoder will never have to worry about this. As setups become more complex, bringing in professional analogue cameras or legacy equipment and sources (VHS tapes, etc.), interlacing might come up, and it’s best to know the quick techniques your encoder can use to correct it.

Interlace video example before deinterlacing

Keyframes, InterFrame & Video Compression


The default mental image of video compression involves unwanted video artifacts, like pixelation and blockiness in the image. This sells short, though, the complexity that actually goes into compressing video content. In particular, it overlooks a fascinating process called interframe, which uses keyframes and delta frames to intelligently compress content in a manner that is intended to go unnoticed.

This article describes this process in detail, while also giving best practices and ideal encoder settings that you can apply to your live streaming at Ustream.

Understanding Video Frames

There are a lot of terms and aspects of streaming technology that can be taken for granted. As someone matures as a broadcaster, it pays to understand these elements in greater detail, learning why a process is done and what the optimal settings are.

For example, a keyframe is something more than a few broadcasters have seen mentioned before, or noticed as a setting in an encoder like Wirecast, without quite realizing what it is and how beneficial the process is for streaming. A keyframe is an important element, but really only part of a longer process that helps reduce the bandwidth required for video. To understand this relationship, one first needs to understand video frames.

Starting at a high level, most probably realize that video content is made up of a series of frames. Usually denoted as FPS (frames per second), each frame is a still image that, when played in sequence, creates a moving picture. So content created at an FPS of 30 has 30 “still images” that play for every second of video.

An Opportunity To Compress: InterFrame

On an average video, if someone were to take 90 consecutive frames and spread them out, they would see a lot of elements that are pretty much identical. For example, if someone is talking while standing next to a motionless plant, it’s unlikely that information related to that plant will change. That’s a lot of wasted bandwidth just to convey that something hasn’t changed.

Consequently, when looking for effective ways to compress video content, frame management became one of the cornerstone principles. If that plant in the example is not going to change, why not just keep using the same elements in some of the subsequent frames to save space?

This realization gave birth to the idea of interframe prediction. This is a video compression technique that divides frames into macroblocks and then looks for redundancies between blocks. The process works through keyframes, also known as i-frames or intra frames, and delta frames, which only store changes in the image to reduce redundant information. These collections of frames are often referred to by the rather non-technical-sounding name “group of pictures”, abbreviated as GOP. Video codecs, used for encoding or decoding a digital data stream, all have some form of interframe management. H.264, MPEG-2 and MPEG-4 all use a three-frame approach that includes keyframes, p-frames, and b-frames.
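
As a toy illustration of the delta frame idea, the Python sketch below compares consecutive frames macroblock by macroblock and keeps only the blocks that changed. Real codecs go much further, adding motion estimation and transform coding, but the underlying principle is the same.

  import numpy as np

  BLOCK = 16  # macroblock size in pixels

  def delta_blocks(prev, cur):
      """Return {(row, col): block} for macroblocks that changed."""
      changed = {}
      for r in range(0, cur.shape[0], BLOCK):
          for c in range(0, cur.shape[1], BLOCK):
              if not np.array_equal(prev[r:r+BLOCK, c:c+BLOCK],
                                    cur[r:r+BLOCK, c:c+BLOCK]):
                  # Only the blocks that changed need to be transmitted.
                  changed[(r, c)] = cur[r:r+BLOCK, c:c+BLOCK]
      return changed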

What Is A Keyframe?

The keyframe (i-frame) is the full frame of the image in a video. Subsequent frames, the delta frames, only contain the information that has changed. Keyframes will appear multiple times within a stream, depending on how it was created or how it’s being streamed.

If someone were to Google “keyframe”, they are likely to find results related to animation and video editing. In this instance, we are using the word keyframe as it relates to video compression and its relationship to delta frames.

How Do P-frames Work?

Also known as predictive frames or predicted frames, the p-frame follows another frame and only contains part of the image in a video. It is classified as a delta frame for this reason. P-frames look backward to a previous p-frame or keyframe (i-frame) for redundancies. The amount of image presented in the p-frame depends on the amount of new information contained between frames.

For example, someone talking to the camera in front of a static background will likely only contain information related to their movement. However, someone running across a field as the camera pans will have a great deal more information with each p-frame to match both their movement and the changing background.

What Are B-frames And How Do They Differ From P-frames?

Also known as bi-directional predicted frames, the b-frame follows another frame and only contains part of the image in a video. The amount of image contained in the b-frame depends on the amount of new information between frames.

Unlike p-frames, b-frames can look backward and forward, to a previous or later p-frame or keyframe (i-frame), for redundancies. This makes b-frames more efficient than p-frames, as they are more likely to find redundancies. However, b-frames are not used when the encoding profile is set to baseline inside the encoder. This means the encoder has to be set at an encoding profile above baseline, such as “main” or “high”.

How Do You Set A Keyframe?

In regards to video compression for live streaming, a keyframe is set inside the encoder. This is configured by an option sometimes called a “keyframe interval” inside the encoder.

The keyframe interval controls how often a keyframe (i-frame) is created in the video. The higher the keyframe interval, generally, the more compression being applied to the content, although that doesn’t mean a noticeable reduction in quality. As an example of how keyframe intervals work: if your interval is set to every 2 seconds and your frame rate is 30 frames per second, a keyframe is produced roughly every 60 frames.
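
The arithmetic is easy to sanity-check; a quick sketch using the example above:

  def keyframe_interval_frames(interval_seconds, fps):
      """Convert a keyframe interval in seconds to a GOP size in frames."""
      return round(interval_seconds * fps)

  print(keyframe_interval_frames(2, 30))     # 60 frames between keyframes
  print(keyframe_interval_frames(2, 29.97))  # also ~60 for NTSC frame rates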

The term “keyframe interval” is not universal, and many encoders have their own term for it. Adobe Flash Media Live Encoder (FMLE) and vMix, for example, use the term “keyframe frequency” to describe this process. Other programs and services might call the interval the “GOP size” or “GOP length”, going back to the “group of pictures” abbreviation.

Choosing A Keyframe Interval At The Encoder Level

The way a keyframe interval is set varies from encoder to encoder.

For FMLE, this option, denoted as “Keyframe Frequency”, is found in the software encoder by clicking the wrench icon to the right of format.

In Wirecast, this is set from the Encoder Presets menu and the option is called “key frame every”. Wirecast is different as the interval is actually denoted in frames. So for a 30 FPS broadcast, setting the “key frame every” 60 frames would roughly give a keyframe interval of 2 seconds, as you have 30 frames every second.

For the vMix encoder, one needs to first click the gear icon near streaming, which opens the Streaming Settings. Near the quality option here is another gear icon and clicking this will open up a menu that has the ability to modify the “Keyframe Frequency”.


Setting the keyframe interval in version v0.542b of Open Broadcast Software (OBS)

In Open Broadcast Software (OBS), for versions after v0.55b, the keyframe interval can be set in the Settings area under Advanced. For versions of OBS before v0.542b, it’s not very obvious how to modify the keyframe interval, but it is also a component of Settings. Once there, go to Advanced and then select “Custom x264 Encoder Settings”. In this field, enter the following string: “keyint=XX”, with XX being the number of frames until a keyframe is triggered. Like Wirecast, if a keyframe interval of 2 seconds is desired and the FPS is 30, enter “keyint=60”.

For XSplit, keyframe interval is a component of the channel properties. Under the Video Encoding area, one will find a listing that says “Keyframe Interval (secs)”. To the far right of this is a gear icon. Clicking the gear will launch a “Video Encoding Setup” popup. This will allow someone to specify the keyframe interval in seconds.

Relationship Between Keyframes And Bitrates

Mileage in this explanation might vary, as encoders manage bitrates and keyframes differently. Using an encoder like Wirecast, one might notice that broadcasting someone talking against a still background has “higher quality” compared to broadcasting someone jumping up and down against a moving background, even when using the same exact average bitrate and keyframe interval for both. The reason is, in part, that the delta frames have a ton of information to share in the jumping example. There is very little redundancy, meaning a lot more data needs to be conveyed in each delta frame.

An encoder like Wirecast, though, is trying its hardest to keep the stream around the average bitrate that was selected. Consequently, the added bandwidth needed for the additional information in the delta frames results in the quality being reduced to keep the average bitrate around the same level.

What’s The Best Setting For A Keyframe Interval?

There has never been an industry standard, although 10 seconds is often mentioned as a good keyframe interval, even though that’s no longer suggested for streaming. The reason it was suggested is that, for a standard 29.97 FPS file, the resulting content is responsive enough to support easy navigation from a preview slider. To explain: a player cannot start playback on a p-frame or b-frame. So using the 10-second example, if someone tried to navigate to a point 5 seconds into the feed, playback would actually shift back 5 seconds to the nearest keyframe. This was considered a good trade-off for smaller bandwidth consumption, although for reference, DVDs elected to use something much smaller than 10 seconds.
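
To picture that navigation behavior, a player seeking into a stream effectively snaps back to the previous keyframe, something like this sketch:

  def snap_to_keyframe(seek_seconds, keyframe_interval=10):
      """Playback can only start on a keyframe, so a seek snaps backward."""
      return (seek_seconds // keyframe_interval) * keyframe_interval

  print(snap_to_keyframe(5))   # 0: jumps back 5 seconds to the last keyframe
  print(snap_to_keyframe(25))  # 20, with the 10-second interval above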

However, for live streaming, the recommended interval has dropped drastically. The reason is the advent of adaptive bitrate streaming. For those unfamiliar with adaptive streaming, this technology enables a video player to dynamically change between available resolutions and/or bitrates based on the viewer trying to watch. So someone with a slower download speed will be given a lower bitrate version, if available. Other criteria, like playback window size, will also impact what bitrate is given.

True adaptive streaming doesn’t just make this check when the video content initially loads, though; it can also alter the bitrate based on changes on the viewer’s side. For example, if a viewer moves out of range of a Wi-Fi network on their mobile, they will start using their normal cellular service, which is liable to result in a slower download speed. As a result, the viewer might be trying to watch content at too high a bitrate for their download speed. The adaptive streaming technology should notice this discrepancy and switch to a different bitrate.
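
In spirit, the player’s selection logic looks something like the sketch below. The rendition list and safety margin are made-up values, and real adaptive players weigh more criteria and smooth their bandwidth estimates over time.

  RENDITIONS_KBPS = [4000, 2500, 1200, 600]  # available bitrates, high to low

  def pick_rendition(measured_kbps, safety=0.8):
      """Pick the highest bitrate that fits the measured bandwidth."""
      budget = measured_kbps * safety  # leave headroom to avoid buffering
      for bitrate in RENDITIONS_KBPS:
          if bitrate <= budget:
              return bitrate
      return RENDITIONS_KBPS[-1]      # fall back to the lowest rendition

  print(pick_rendition(3500))  # 2800 kbps budget -> picks the 2500 rendition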

The keyframe interval comes into play here, as the switch happens at the next keyframe. So if someone is broadcasting with a 10-second interval, it could take up to 10 seconds before the bitrate and resolution change. That length of time means the content might buffer on the viewer’s side before the change occurs, something that could lead to viewer abandonment.

Because of this, it’s recommended to set your keyframe interval at 2 seconds for live streaming. This lets the video track change bitrates quickly, before the viewer experiences buffering due to a degradation in their download speed.

What’s An IDR-Frame?

We are coming full circle at this point, but it pays to understand p-frames and b-frames, and to get a crash course in adaptive streaming, before talking about what an IDR-frame, or Instantaneous Decode Refresh frame, is. These are actually keyframes, and each keyframe can be either IDR based or non-IDR based. The difference is that the IDR based keyframe works as a hard stop: an IDR-frame prevents p-frames and b-frames from referencing frames that occurred before the IDR-frame. A non-IDR keyframe allows those frames to look further back for redundancies.

On paper, a non-IDR keyframe sounds ideal: it can greatly reduce file size by being allowed to look at a much larger sample of frames for redundancies. Unfortunately, a lot of issues arise with navigation and the feature does not play nicely with adaptive streaming. For navigation, let’s say someone starts watching 6 minutes into a stream. That’s going to cause issues as the p-frames and b-frames might be referencing information that was never actually accessed by the viewer. For adaptive streaming, a similar issue can arise if the bitrate and resolution are changed. This is because the new selection might reference data that the viewer watched at a different quality setting and is no longer parallel. For these reasons, it’s always recommended to make keyframes IDR based.

Generally, encoders will either provide the option to turn IDR based keyframes on or off, or won’t give the option at all. For those encoders that don’t give the option, it’s almost assuredly because the encoder is set up to only use IDR-frames.

Should Someone Use An “Auto” Keyframe Setting?

In short: no.

Auto keyframe settings are, in principle, pretty great. They force a keyframe during a scene change. For example, switching from a PowerPoint slide to an image of someone talking in front of a camera would force a new keyframe. That’s desirable, as the delta frames would not have much to work with, unable to find redundancies between the PowerPoint slide and the image from the camera.

Unfortunately, this process does not work with some adaptive streaming technologies, most notably HLS. The HLS process requires the keyframes to be predictable and in sync. Using an “auto” setting will create variable intervals between keyframes. For example, the time between keyframes might be 7 seconds and then later it might be 2 seconds if a scene change occurs quickly.


Setting a whole number in OBS v0.55b to disable auto switching

For most encoders, disabling “auto change” or “scene change detect” features often just means specifying a keyframe interval. For example, in OBS, if the keyframe interval is set at 0 seconds, the auto feature kicks in. Placing any other number in there, like 1 or 2, will disable the auto feature.

If the encoder, like Wirecast, has an option for “keyframe alignment”, it should be known that this is not the same process. Having keyframes aligned is a process for creating specific timestamps and is best suited for keeping multiple bitrates that the broadcaster is sending through the encoder in sync.

Perfecting A Keyframe Strategy

With the advent of adaptive bitrates, the industry has arrived at a pretty clear answer on best practices for keyframes and live streaming. That strategy includes:

  • Setting a keyframe interval at around 2 seconds
  • Disabling any “auto” keyframe features
  • Utilizing IDR based keyframes
  • Using an encoding profile higher than baseline to allow for b-frames

This strategy allows for easy navigation of content for on demand viewing after a broadcast, while still reaping the benefits of frame management and saving bandwidth by reducing redundancies. It also supports adaptive bitrate streaming, an important element of a successful live broadcast and of supporting viewers with slower connections.

Please Contact Sales for more questions on interframe and how Ustream can help you deliver high quality video alongside lower bitrate options through cloud transcoding.

 

Disclaimer: This article is aimed at helping live broadcasters, or at least those who plan for a healthy video on demand strategy built on streaming. The answer to many of these questions would of course be different depending on the playback method. For example, when creating video content intended to be played as a video file, the “scene change” option is just one example of something that would be ideal. Some of these techniques only become undesirable in relation to streaming when using adaptive technology.

The History of Ustream at NAB



From April 16th through the 21st, the Las Vegas Convention Center will be taken over by 100,000 video professionals and content creators from 150+ countries looking for the chance to get hands-on experience with emerging technologies and the latest innovations in video production and delivery. NAB 2016 is right around the corner, and Ustream has had the privilege of attending the big event 4 years in a row. Let’s take a look back at some of the highlights and the history of Ustream at NAB.

2012
Ustream started our presence at NAB way back in 2012 by providing live coverage for our partners at NewTek, TWiT & Panasonic, and combined all of the action into one super channel that helped viewers keep up on all of the excitement at the show.

2013
The theme of NAB 2013 was the evolution of broadcast media and how social media and consumer engagement are changing the industry landscape. Ustream’s CEO & Founder, Brad Hunstable, had the pleasure of hosting a session about the “Reinvention of Live Media” that went into depth about how Ustream stays ahead of the curve of the new age of real time consumer behavior. We also sponsored the Technology Awards Luncheon, where the National Association of Broadcasters gave recognition to some of the most innovative people in the video community.

2014
In 2014, Teradek broadcast coverage from NAB and updated online audiences on the latest and greatest announcements from the world’s largest broadcast equipment manufacturers and industry influencers. The live show was streamed exclusively on Ustream for 32 hours over the course of 4 days and offered Spanish captioning for the very first time. Special segments were provided by a variety of partners, including Streaming Media, Philip Bloom, & Broadcast Beat, who each offered their own unique perspective on the industry and provided a well-rounded report of everything happening on the show floor.

2015
NAB 2015 marked the debut of the Online Video Conference, where executives from digital media firms gathered to discuss issues such as online original content, the migration to over-the-top (OTT) content and online advertising metrics. This set the stage for Ustream to show off our latest solution for marketers, Ustream Demand, along with our platforms for internal communicators and broadcasters, Ustream Align and Ustream Pro Broadcaster, in addition to being the exclusive onsite live streaming provider for clients such as Teradek, Maxon, Sony, Adobe and JVC.

What does Ustream have in store for NAB 2016? Well, you are going to have to join us in Las Vegas to find out! Register today using the code “LV7669” to get access to the show for free until April 1st. We look forward to seeing you there!

REGISTER NOW

Video Terms: Live Streaming & Hosting Glossary


A streaming media and video glossary that contains definitions of video terms, technologies and techniques related to live streaming, broadcasting and video hosting.

These video terms are relevant for both new techniques and legacy methods, which still have ramifications today when handling older media. The glossary will be continuously updated as the industry evolves.


# | A | B | C | D | E | H | I | K | L | M | P | R | S | T | U | V


2:3 Pull Down (aka: Three-Two Pull Down)

A process used to convert material from film to interlaced NTSC display rates, from 24 to 29.97 frames per second. This is done by duplicating fields, two from one frame and then three from the next frame, or vice versa.

608 Captions (aka: line 21 captions, EIA-608, CEA-608)

These captions contain white text against a black box that surrounds the text. They appear on top of video content and support four caption tracks.

708 Captions (aka: CEA-708)

These captions were designed with digital distribution of content in mind. They are a more flexible version of captions than the older 608 approach, allowing for more caption tracks, more character types and the ability to modify the appearance.

AAC (aka: Advanced Audio Coding)

This audio coding format is lossy, featuring compression that does impact the audio quality. It offers better compression and increased sample frequency when compared to MP3.

AC-3 (aka: Audio Codec 3, Advanced Codec 3, Acoustic Coder 3)

A Dolby Digital audio format found on many home media releases. Dolby Digital is a lossy format, featuring compression that will impact audio quality. The technology is capable of utilizing up to six different channels of sound. The most common surround experience is a 5.1 presentation.

Adaptive Streaming (aka: Adaptive Bitrate Streaming)

This streaming approach offers multiple streams of the same content at varying qualities. These streams are served inside the same video player and often differ based on bitrate and resolution. Ideally the player should serve the viewer the bitrate most appropriate to their setup, based on qualifications like download speed.

B-frames (aka: bi-directional Predicted Frames)

These frames follow another frame and only contain part of the image in a video. B-frames look backward and forward to a previous or later p-frame or keyframe (i-frame) and only contain new information not already presented.

Bandwidth

In relation to video, bandwidth describes either an internet connection speed or a form of consumption in relation to web hosting. For speed, it is a point of reference for an internet connection; when streaming content, this matters because a viewer has to have enough bandwidth to watch. For web hosting, bandwidth is a measure of data consumed.

Bit Rate (aka: data rate or bitrate) 

The amount of data per unit of time. For streaming video and audio, it is measured per second and typically expressed in kilobits per second (Kbps) or megabits per second (Mbps).
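
The arithmetic is straightforward; for example, a rough size estimate for a stream at a given bitrate (a sketch that ignores container overhead):

    def stream_size_megabytes(bitrate_kbps, duration_seconds):
        """Rough size: kilobits -> bits -> bytes -> megabytes.
        Ignores container overhead, so treat it as an estimate."""
        bits = bitrate_kbps * 1000 * duration_seconds
        return bits / 8 / 1_000_000

    # One hour of combined video and audio at 2,500 Kbps:
    print(round(stream_size_megabytes(2500, 3600)))  # ~1125 MB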

Buffering

Video streaming involves sending video data to the end user in chunks. The video player builds a buffer of chunks that have not yet been viewed, letting playback continue from the buffer if a chunk is delayed or lost. Ideally the missing chunk arrives before the buffer empties, causing no disruption in viewing. On a slow connection, however, the buffer can empty before the next chunk arrives; the player then stops playback, generally displays a buffering message, and waits for more data while it rebuilds the buffer.
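
A toy simulation of the idea, using made-up chunk sizes and arrival times purely to show how a buffer drains faster than it fills on a slow connection:

    # Toy buffer model (illustration only): chunks hold 2 seconds of
    # video, playback drains in real time, and a slow connection
    # delivers a chunk only every 3 seconds, so the buffer empties.
    chunk_seconds = 2.0
    arrival_interval = 3   # seconds between chunk arrivals
    buffer_seconds = 6.0   # pre-buffered before playback starts

    for second in range(1, 30):
        buffer_seconds -= 1.0                # playback drains 1 s per second
        if second % arrival_interval == 0:
            buffer_seconds += chunk_seconds  # a new chunk arrives
        if buffer_seconds <= 0:
            print(f"Rebuffering at t={second}s")
            break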

CDN (aka: Content Delivery Network)

These are large networks of servers that hold copies of data pulled from an origin server and are often geographically dispersed. The end user pulls the needed resources from the server closest to them, called an edge server. This shortens the physical distance the data must travel, reducing delays, and helps avoid congestion issues. Because video streaming is so resource intensive, most streaming platforms utilize a CDN.
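
A toy sketch of edge selection, with hypothetical edge locations and latency figures; real CDNs route viewers via DNS and anycast rather than a lookup table:

    # Hypothetical round-trip latencies from one viewer to each edge
    # server, in milliseconds (sketch only).
    edge_latency_ms = {"us-east": 120, "eu-west": 35, "ap-south": 210}

    # The viewer is directed to the closest (lowest-latency) edge.
    closest_edge = min(edge_latency_ms, key=edge_latency_ms.get)
    print(closest_edge)  # eu-west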

CRTP (aka: Compressed Real Time Transport Protocol)

This is a compressed form of RTP, designed to reduce the size of the IP, UDP (User Datagram Protocol) and RTP headers. It needs a fast and dependable network to perform well; otherwise it can suffer long delays and packet loss.

Deinterlace

Deinterlacing filters combine the two alternating fields found in interlaced video to form a clean shot in a progressive video. Without deinterlacing, the interlaced content will often display motion with a line-like appearance.

Embedded Player

This is a media player that is enclosed in a web source, which can range dramatically from being seen in an HTML document on a website to a post on a forum. Players will vary based on appearance, features and available end user controls. An iframe embed, which can be used to embed a variety of content, is one of the most common methods of embedding a video player.
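
As a rough illustration of what an embed code looks like, here is a Python sketch that builds an iframe snippet; the URL and attributes are made up for illustration and do not reflect any real embed endpoint:

    # Hypothetical embed-code builder (illustration only).
    def iframe_embed(src, width=640, height=360):
        return (f'<iframe src="{src}" width="{width}" '
                f'height="{height}" allowfullscreen></iframe>')

    print(iframe_embed("https://example.com/embed/my-channel"))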

H.264 (aka MPEG-4 Part 10, Advanced Video Coding, MPEG-4 AVC)

A video compression technology, commonly referred to as a codec, defined in the MPEG-4 specification. H.264 video is most commonly packaged in the MP4 container, though it can also be carried in other containers, such as MPEG-TS.

HDS

Adobe’s HTTP Dynamic Streaming is an HTTP-based technology for adaptive streaming. It segments the video content into smaller video chunks, allowing switching between bit rates when viewing.

HLS

Apple’s HTTP Live Streaming is an adaptive streaming technology. It functions by breaking the stream down into smaller MPEG2-TS files. These files vary by bitrate and often by resolution, and ideally are served to the viewer based on the criteria of their setup, such as download speed.
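
For a sense of how a player discovers the available renditions, here is a sketch that reads the variant entries of a made-up master playlist. The #EXT-X-STREAM-INF tag is real HLS, but the playlist and the naive comma splitting are simplified for illustration (real playlists carry quoted attributes with commas that this would mishandle):

    master_playlist = """#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
    mid.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    high.m3u8"""

    lines = [ln.strip() for ln in master_playlist.splitlines()]
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(f.split("=") for f in line.split(":", 1)[1].split(","))
            # The line after each tag is the rendition's playlist URI.
            print(attrs["BANDWIDTH"], attrs["RESOLUTION"], "->", lines[i + 1])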

Interlaced Video

A technique used for television video formats, such as NTSC and PAL, in which each full frame of video actually consists of alternating lines taken from two separate fields captured at slightly different times. The two fields are then interlaced or interleaved into the alternating odd and even lines of the full video frame. When displayed on television equipment, the alternating fields are displayed in sequence, depending on the field dominance of the source material.

IP Camera (aka: Internet Protocol Camera)

A digital camera that can both send and receive data via the Internet or a computer network. These cameras are designed to support a limited number of users connecting directly to the camera to view its feed. They are RTSP (Real Time Streaming Protocol) based and, for that reason, are not widely supported by broadcasting platforms without using special encoders.

Keyframe (aka: i-frame, Intra Frame)

This is the full frame of the image in a video. Subsequent frames only contain the information that has changed between frames. This process is done to compress the video content.

Key Frame Interval (aka: Keyframe Interval)

Set inside the encoder or when the video is being encoded, the key frame interval controls how often a keyframe is created in the video. The keyframe is a full frame of the image. Other frames will generally only contain the information that has changed.
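
Encoders often express this as a GOP (group of pictures) size in frames; a quick sketch of the conversion from seconds to frames:

    def gop_size(interval_seconds, fps):
        """Keyframe interval in frames = seconds between keyframes x frame rate."""
        return int(interval_seconds * fps)

    # A 2-second interval at 30 fps places a keyframe every 60 frames.
    print(gop_size(2, 30))  # 60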

Live Streaming

Relates to media content being delivered live over the Internet. The process involves a source (video camera, screen captured content, etc), an encoder to digitize the feed (Teradek VidiU, Telestream Wirecast, etc), and a platform such as Ustream or another provider that will typically take the feed and publish it over a CDN (Content Delivery Network). Content that is live streamed will typically lag the source by a delay on the order of seconds.

Lossless Compression

Lossless encoding is any compression scheme, especially for audio and video data, that uses a nondestructive method that retains all the original information. Consequently, lossless compression does not degrade sound or video quality, meaning the original data can be completely reconstructed from the compressed data.
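
A quick demonstration of that round-trip property using Python's built-in zlib module, which implements DEFLATE, a lossless scheme:

    import zlib

    original = b"the same frame repeated " * 100
    compressed = zlib.compress(original)

    # Lossless: decompression reconstructs the input exactly.
    assert zlib.decompress(compressed) == original
    print(len(original), "->", len(compressed), "bytes")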

Lossy Compression

Lossy encoding is any compression scheme, especially for audio and video data, that removes some of the original information in order to significantly reduce the size of the compressed data. Lossy image and audio compression schemes such as JPEG and MP3 try to eliminate information in subtle ways so that the change is barely perceptible, and sound or video quality is not seriously degraded.

MPEG-DASH (aka: Dynamic Adaptive Streaming over HTTP)

An adaptive bitrate streaming technology. It contains both the encoded audio and video streams along with manifest files that identify the streams. The video stream is broken down into small HTTP-delivered segment files, allowing the player to switch between quality levels as conditions change.

MPEG-TS (aka: Transport Stream, MTS, TS)

A container format that hosts packetized elementary streams for transmitting MPEG video muxed with other streams. It can also have separate streams for video, audio and closed captions. It’s commonly used for digital television and streaming across networks, including the internet.

P-frames (aka: Predictive Frames, Predicted Frames)

A p-frame follows another frame and contains only part of the image in a video. P-frames look backward to a previous p-frame or keyframe for redundancies.

Program Stream (aka: PS)

These streams are optimized for efficient storage. They contain elementary streams without an error detection or correction process, and assume the decoder has access to the entire stream for synchronization purposes. Consequently, program streams are often found on physical media formats, such as DVDs or Blu-rays.

Progressive Video

A video track that consists of complete frames without interlaced fields. Each individual frame is a coherent image at a single moment in time, so a paused video shows the entire image. All streaming files are progressive; this should not be confused with the use of keyframes and p- or b-frames.

Reverse Telecine (aka: Inverse Telecine, IVTC)

This is a process used to reverse the effect of 3:2 pulldown, achieved by removing the extra fields that were inserted to stretch 24 frame per second film to 29.97 frames per second interlaced video.

RTMP (aka: Real Time Messaging Protocol)

A TCP-based protocol that allows for low-latency communication. In the context of video, it delivers live and on demand media content that can be viewed in Adobe Flash applications, although the source can be modified for other playback methods.

RTP (aka: Real Time Transport Protocol)

A network protocol designed to deliver video and audio content over IP networks, running on top of UDP. The components of RTP include a sequence number, payload identification, frame indication, source identification and intramedia synchronization.

RTSP (aka: Real Time Streaming Protocol)

A method for streaming video content through controlling media sessions between end points. This protocol uses port 554. Using this method, data is often sent via RTP. RTSP is a common technology found in IP cameras. However, some encoders, like Wirecast, can actually take the IP camera feed and deliver it in an RTMP format.

Silverlight

Microsoft’s Silverlight is both a video playback solution and an authoring environment. The user interface and description language is Extensible Application Markup Language (XAML). The technology is natively compatible with the Windows Media format.

Smooth Streaming (aka: IIS Smooth Streaming)

Microsoft’s Smooth Streaming for Silverlight is an adaptive bitrate technology. It’s a hybrid media delivery method that is based on HTTP progressive download. The downloads are sent in a series of small video chunks. Like other adaptive technology, Smooth Streaming offers multiple encoded bitrates of the same content that can then be served to a viewer based on their setup.

Streaming Video (aka: Streaming Media)

Refers to video and/or audio content that can be played directly over the Internet. Unlike progressive download, an alternative method, the content does not need to be downloaded onto the device first in order to be viewed or heard. It allows for the end user to begin watching as additional content is constantly being transmitted to them.

Transcoding

The process of transcoding involves converting video from one format into another. This is often done to make a file compatible with a particular service or device.
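
In practice this is often done with a command-line tool such as FFmpeg; here is a minimal sketch of invoking it from Python. The filenames are placeholders, and it assumes an FFmpeg build with the widely used libx264 encoder is installed:

    import subprocess

    # Sketch only: re-encode a source file to H.264 video and AAC audio.
    subprocess.run([
        "ffmpeg", "-i", "input.mov",
        "-c:v", "libx264",  # video codec for the output
        "-c:a", "aac",      # audio codec for the output
        "output.mp4",
    ], check=True)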

Transrating

Involves changing a video source from one bitrate to another. This is often done to accommodate adaptive bitrate technologies by generating lower quality bitrates.

UDP (aka: User Datagram Protocol)

A connectionless protocol for transmitting data over a network, commonly used for real-time audio and video because it favors low latency over guaranteed delivery. In terms of real-time protocols, RTMP (Real Time Messaging Protocol) is based on TCP (Transmission Control Protocol), which led to the creation of RTMFP (Real Time Media Flow Protocol), based on UDP.

Video Compression

This process uses codecs to present video content in a less resource intensive format. Due to the high data rate of uncompressed video, most video content is compressed. Techniques range from intra-frame methods, such as compressing each image on its own, to inter-frame methods, which look for redundancies between frames and transmit only the changes from a keyframe as delta frames.
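
A toy illustration of the inter-frame idea: store only the pixels that changed relative to the previous frame. Real codecs use motion estimation and transforms rather than raw deltas, so treat this purely as a sketch:

    # Toy inter-frame delta (illustration only): frames as flat pixel lists.
    keyframe   = [10, 10, 10, 10, 10, 10]
    next_frame = [10, 10, 99, 99, 10, 10]

    # Record only (position, new_value) pairs where pixels changed.
    delta = [(i, b) for i, (a, b) in enumerate(zip(keyframe, next_frame)) if a != b]
    print(delta)  # [(2, 99), (3, 99)] -- far less data than a full frame

    # Reconstruction applies the delta on top of the keyframe.
    rebuilt = list(keyframe)
    for i, value in delta:
        rebuilt[i] = value
    assert rebuilt == next_frame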

Video Encoding

A process to reduce the size of video data, often times with audio data included, through the use of a compression scheme. This compression can be for the purpose of storage, known as program stream (PS), or for the purpose of transmission, known as transport stream (TS).

Video Scaling (aka: Trans-sizing)

A process to either reduce or enlarge an image or video sequence by squeezing or stretching the entire image to a smaller or larger image resolution. While this sometimes can just involve a resolution change, it can also involve changing the aspect ratio, like converting a 4:3 image to a “widescreen” 16:9 image.
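
The aspect-ratio arithmetic is simple; a quick sketch computing the height that preserves a target ratio for a given width:

    def target_height(width, aspect_w=16, aspect_h=9):
        """Height that preserves the requested aspect ratio for a width."""
        return round(width * aspect_h / aspect_w)

    print(target_height(1280))       # 720  (16:9)
    print(target_height(640, 4, 3))  # 480  (4:3)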

VOD (aka: Video On Demand)

VOD refers to content that can be viewed on demand by an end user. The term is commonly used to differentiate recorded content from live content. That said, previously recorded content can also be presented in a way that is not on demand, such as televised programming that gives the end user no control over what they watch.

Please visit our Support Center or Contact Sales for Ustream compatibility questions regarding the terms found in this video glossary.

Live at IBM Interconnect 2016

Posted on by

IBM Interconnect is the premier event for learning how to get the most out of your existing investments, with hands-on training in cloud and mobile solutions built for security, powered by cognitive computing and equipped with advanced analytics. Now that Ustream is part of the Cloud Video Services unit, we had the privilege of being a part of the event.

In addition to having the opportunity to meet our new IBM friends and family face to face, we were also excited for the chance to get hands-on experience and an insider's look at some of the amazing technology that IBM is a part of. From dancing robots to a BB-8 droid that you can control with your mind, the future of IBM is evolving. We may be a bit biased, but the biggest stars of the show were located in the Cloud Video Services unit, specifically the folks at Clearleap, Aspera, Cleversafe and, of course, Ustream.

We were also honored to have the opportunity to hit the main stage at the Cloud/Mobile Expo Theatre, where our VP of Product, Alden Fertig, addressed the community and discussed how video has become a global medium for communication, entertainment, information and applications with his presentation, “Video Has Become a ‘First Class’ Data Type in Enterprise”.

Thank you for joining us at IBM Interconnect 2016, and we look forward to seeing you again next year! In the meantime, reach out to one of our sales representatives to learn more about how the IBM Video Cloud can help you and your business make the big leap into the future.

CONTACT US

The New Viewing Experience is Here

Posted on by

We’re happy to announce that the new Viewing Experience we announced in December is now publicly available to all broadcasters!

The new channel design comes with lots of benefits:

  • Responsive layout that looks great on all screen sizes
  • Large player and more room for the chat
  • Large video gallery
  • Interactive Chat & Social Stream
  • Description now available for VODs
  • Fewer ads on the page and a lot more room for your content

Customization options include:

  • Cover image
  • About section with rich text formatting, including images and links
  • Links to external websites (Facebook, Twitter, PayPal, etc)
  • Links to your other channels on Ustream

To see the new look in action, check out the channels that have already moved to the new design.

A great thing about the new design is that all customization will carry over to mobile. The iOS and Android app updates will be released in a few weeks.

From now on, all new channels get the new design by default. If you have an existing channel, you can choose to migrate to the new design manually until April 2nd, 2016, when the old channel design will be discontinued.

Visit your Dashboard to see what’s new and don’t forget to leave us feedback!