Software to make smarter video streaming decisions

Cisco, Intel and Verizon develop neuroscience-based algorithms that adapt video quality to the demands of the human eye.

Algorithms that understand the way we view video and adjust the quality to the demands of the human eye could keep wireless networks from buckling under mounting traffic driven, in part, by demand for mobile video.

The algorithms were developed by a team of industry and university researchers tasked with rethinking how video is distributed over wireless networks. Compared with data rate algorithms in use today, these new algorithms, some of which are based on neuroscience, effectively halve data traffic by optimizing delivery and playback.
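
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not the team's actual code) of perception-aware bitrate selection: rather than streaming the highest bitrate the link can carry, the player picks the lowest bitrate whose predicted perceptual quality clears a "viewers won't notice the difference" threshold. The quality table and threshold below are invented for illustration.

```python
# Hypothetical mapping from bitrate (kbps) to a predicted perceptual quality
# score (0-100) for one clip on one screen size; a real system would get
# these scores from a perceptual quality model.
QUALITY_BY_BITRATE = {500: 62, 1000: 80, 2000: 91, 4000: 93, 8000: 94}

def select_bitrate(network_capacity_kbps, quality_threshold=90):
    """Pick the cheapest bitrate that is perceptually 'good enough'."""
    affordable = [b for b in sorted(QUALITY_BY_BITRATE) if b <= network_capacity_kbps]
    for bitrate in affordable:
        if QUALITY_BY_BITRATE[bitrate] >= quality_threshold:
            return bitrate  # lowest rate viewers can't tell apart from higher ones
    # Nothing clears the bar: fall back to the best rate we can afford.
    return affordable[-1] if affordable else min(QUALITY_BY_BITRATE)

# A capacity-only player would pick 8000 kbps on a 10 Mbps link; the
# perceptual rule settles on 2000 kbps -- the kind of saving described above.
print(select_bitrate(network_capacity_kbps=10000))  # -> 2000
```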

“Understanding how to adapt video streams based on human perception of that video means more people can get higher quality video experiences when using mobile networks,” said Jeff Foerster, principal engineer at Intel Labs, who worked on the algorithms as part of VAWN, the Video Aware Wireless Networks project.

Funded by Cisco, Intel and Verizon, the project began in 2010 but is now winding down. “Intel Labs looked at treating video differently than just bits of data in order to optimize devices and networks for better video experiences,” said Foerster.

With the VAWN project nearing conclusion, Intel and Verizon are behind a new collaborative research project exploring standards-based technologies to advance wireless networks.

“We’re working on a new collaborative program (dubbed 5G: Transforming the User Wireless Experience) with Intel that looks beyond 4G at what we do next,” said Chris Neisinger, executive director of network technology and planning at Verizon. “I see it as an extension of VAWN with a new focus that finds ways to deploy what we’ve learned.”

“Video will be part of the new research project, but not the sole focus as it was with VAWN,” said Neisinger.

Wireless network operators, Internet video services, video application makers and consumer electronics manufacturers can use the publicly available research findings and algorithms from VAWN to experiment with delivering quality experiences more efficiently across a variety of device types and screen sizes.

Mobile Video Explosion

Last year, video accounted for more than half of mobile network traffic for the first time, and mobile video traffic is expected to increase 16-fold between 2012 and 2017, when video will account for more than 66 percent of all mobile traffic, according to the recent Cisco Visual Networking Index.
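
For scale, a 16-fold increase over the five years from 2012 to 2017 works out to roughly 74 percent compound annual growth:

```python
# Implied compound annual growth rate for a 16x increase over 5 years.
growth_factor = 16 ** (1 / 5)
print(f"~{growth_factor:.2f}x per year (about {growth_factor - 1:.0%} annual growth)")
# -> ~1.74x per year (about 74% annual growth)
```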

The three-year VAWN effort anticipated the impending wireless traffic jam.

“Video is driving a lot of behaviors,” said Neisinger. “It’s not just about YouTube. Twitter, Facebook, Instagram and Pinterest are also using video. It’s showing up everywhere. We’re experiencing a rise of video use in all types of applications.”

Researchers say the new algorithms are a step toward building an understanding of human perception into video experiences.

“By focusing the project on video, we’re able to really analyze video quality from the viewer standpoint, then understand how we can improve quality of service to every screen type while giving wireless networks smarter options for efficiently delivering video to consumers,” said Foerster.

Algorithm Based on Human Experience

VAWN was not just a corporate research project: professors and students from Cornell University; Moscow State University; the University of California, San Diego; the University of Southern California; and the University of Texas at Austin also contributed.

Collaboration between university and industry researchers pushed the VAWN project beyond computer science into other disciplines, notably neuroscience. That work led to the creation of software algorithms based on an understanding of the way humans view video on various screen sizes.

“We needed to understand how the brain works,” said Lark Kwon Choi, a postgraduate student at the University of Texas at Austin who has been an intern on the VAWN project for two years. “The human brain is tricky because it can be influenced by emotion.”

Choi meticulously noted when and why people forgave poor-quality video, and he conducted digital autopsies on videos to understand how each was encoded when recorded, whether it was delivered over a wired or wireless network, and what type of device decoded it for the viewer.
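
The article doesn't describe how those observations were recorded; the sketch below imagines one session record with the kinds of fields Choi examined (all names are illustrative, not from the project):

```python
from dataclasses import dataclass

@dataclass
class ViewingSession:
    """One viewing session from a hypothetical 'digital autopsy' log."""
    codec: str                    # how the video was encoded when recorded
    bitrate_kbps: int
    resolution: tuple[int, int]   # (width, height)
    delivery: str                 # "wired" or "wireless" network path
    device: str                   # device type that decoded the video
    viewer_score: int             # subjective quality rating from the viewer, 1-5

session = ViewingSession(codec="h264", bitrate_kbps=1200,
                         resolution=(1280, 720), delivery="wireless",
                         device="smartphone", viewer_score=4)
```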

He found that human perception forgives degraded video quality in certain situations, especially in scenes with heavy motion compared with more static ones. That insight led to algorithms that adapt video quality to the viewing circumstances.
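
As a rough sketch of that kind of circumstance-based adaptation (illustrative only, not the project's published algorithm): because motion masks compression artifacts, a rate controller can spend fewer bits on fast scenes and more on static ones, where the eye is most critical. The 40 percent adjustment below is an assumed knob, not a figure from the research.

```python
def target_bitrate_kbps(base_kbps, motion_score):
    """motion_score in [0, 1]: 0 = static scene, 1 = heavy motion.

    Assumption: spend up to 40% fewer bits on high-motion scenes,
    where viewers are most forgiving of artifacts.
    """
    return int(base_kbps * (1.0 - 0.4 * motion_score))

print(target_bitrate_kbps(2000, motion_score=0.9))  # fast action -> 1280 kbps
print(target_bitrate_kbps(2000, motion_score=0.1))  # near-static -> 1920 kbps
```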

Though the VAWN project is winding down, Foerster says that Intel is exploring how the perceptual video quality metrics that came out of the research can be useful for future consumer and enterprise products.
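
The article doesn't name those metrics; as a stand-in, the widely used SSIM index illustrates what a perceptual quality score looks like in practice, scoring a degraded frame against its reference by perceived rather than bit-level fidelity (random arrays stand in for real frames here):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((480, 640))  # stand-in grayscale frame
noisy = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

score = structural_similarity(reference, noisy, data_range=1.0)
print(f"SSIM: {score:.3f}")  # 1.0 means perceptually identical
```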

Neisinger admits that he was hoping the VAWN project would result in something Verizon could implement in its network quickly. “We realize it’s a long process, but this type of collaboration is making things move faster,” he said.
