We caught up recently with John Williams, director of emerging markets at JDSU, to talk about a specialty of his – video delivery and content in the “new world” of Internet TV, on-demand services and other next-generation video services. Quality is the number one concern – that is, how to help service providers deliver the best possible experience to consumers. Recent IPTV-related news underscores how explosive the trend is. For example, Deutsche Telekom reached 1.375mn subscribers for its German IPTV service 'T-Home Entertain' by the end of the third quarter of this year, adding 74,000 customers in the three-month period. Cisco even launched the industry's first integrated wireless IPTV service with AT&T!
Based on a recent briefing, here’s a sometimes technical, but always insightful, commentary on video from an informative source – our very own John Williams:
One communications test and measurement technique is monitoring to ensure a “quality of experience” for high-bandwidth, high-capacity services like video – how important is this for today’s service providers?
Quality of experience (the quality of what the end user actually experiences) is an important part of how service providers are managing the delivery of video services. It cannot be measured directly, since it is by definition an individual “perceptual” view of quality. However, quality of service (the quality of the service actually being delivered) consists of metrics that can be measured objectively, and those metrics inform quality of experience. In addition, service providers have determined for themselves that certain metrics correlate directly with quality of experience outcomes. For example, packet loss is a quality of service metric that, above certain thresholds, damages quality of experience. Most providers have now added some kind of error recovery mechanism to their networks. For example, MediaRoom from Microsoft includes an error recovery approach that retransmits lost packets to the set-top box. In this way they repair, up to certain limits, network performance problems in this key area. Cisco offers a similar error recovery approach based on retransmission of lost packets. The quality of service metric thresholds are then set to match the specific network design in place.
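The retransmission idea can be sketched in a few lines – a hedged illustration, not Microsoft's or Cisco's actual protocol: the receiver detects gaps in packet sequence numbers and requests only the missing ones, up to a repair limit beyond which it must fall back on concealment.

```python
# Illustrative sketch of sequence-gap-based retransmission repair.
# Function names and the request limit are invented for illustration.

def find_missing(received_seqs, first_seq, last_seq):
    """Return the sequence numbers in [first_seq, last_seq] that never arrived."""
    expected = set(range(first_seq, last_seq + 1))
    return sorted(expected - set(received_seqs))

def build_retransmit_request(received_seqs, first_seq, last_seq, max_request=32):
    """Build a bounded retransmission request; losses beyond the limit
    are left to error concealment rather than repair."""
    missing = find_missing(received_seqs, first_seq, last_seq)
    return missing[:max_request]
```

For example, a receiver that saw packets 1, 2, 4, 5 and 7 of a 7-packet window would request retransmission of packets 3 and 6.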
MOS, or mean opinion score, scoring of video programming has become a common way to gauge quality of experience directly, at least for the program content itself. Typically this means scoring the audio portion and the video portion of a program separately and then producing a combined audio/video score. This does provide an indication of quality of experience. But quality of experience involves more than just program content. It also includes what we call transactional quality – for example, channel change time, or responsiveness to control commands such as pause and play for a video on demand (VOD) program. These directly impact quality of experience and are now routinely measured in most networks by test and measurement solutions.
How does test and measurement help the quality of Internet Protocol Television (IPTV), catch-up and on-demand services?
Test and measurement technologies can help with video on demand transactional quality. The same is true for catch-up services. All will be impacted by packet loss in the network, so that will always be a key metric to manage. But the distribution network is not the only thing in the video delivery ecosystem that impacts quality. Source content and processing is a key component: encoders must operate properly, and programming from content providers must be of good quality. Test and measurement resources must be deployed at the content ingest points to validate this source quality.
What are the “quality thresholds” set for existing IPTV and other pay-TV services? Are they being met?
Any threshold must be established in light of a given network design. A threshold for packet loss would be set differently in a network with MediaRoom versus one without such a robust error recovery mechanism. FEC, or forward error correction, operating at the packet level rather than the bit level can be very effective, but would call for different packet loss thresholds. In general, though, packet loss reaching the subscriber cannot exceed 0.1% without becoming a quality issue. That figure says nothing about the distribution of loss, which is where the mean opinion score (MOS) comes into play, since it takes the distribution of loss into account as part of its analysis. For example, the loss rate is the same for a given number of packets lost in a given time period, but if that number of packets were all lost at the very beginning of a program rather than a little at a time throughout it, the latter would draw a much lower quality of experience opinion. Another threshold might be channel change times – they should be under 0.5 seconds.
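The 0.1% figure and the point about loss distribution can be made concrete. In this sketch the threshold value comes from the interview, while the burstiness measure is an invented illustration – real MOS models weigh the distribution of loss far more elaborately – showing that the same total loss concentrated in one burst or spread thinly across a program produces the same rate but very different distributions.

```python
# Illustrative loss-rate and loss-distribution checks; helper names and the
# burstiness measure are assumptions for the sake of the example.

def loss_rate(lost_count, total_count):
    """Fraction of packets lost over a measurement interval."""
    return lost_count / total_count

def exceeds_loss_threshold(lost_count, total_count, threshold=0.001):
    """The interview cites ~0.1% subscriber-side loss as a quality boundary."""
    return loss_rate(lost_count, total_count) > threshold

def burstiness(per_second_losses):
    """Crude distribution measure: peak one-second loss divided by the mean.
    Evenly spread loss scores near 1; a single burst scores much higher,
    even though the overall loss rate is identical."""
    mean = sum(per_second_losses) / len(per_second_losses)
    return max(per_second_losses) / mean if mean else 0.0
```

So 2 lost packets out of 1,000 (0.2%) trips the threshold while 1 in 2,000 (0.05%) does not, and ten packets lost in one second scores far burstier than the same ten spread over five seconds.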
To what extent is proactive monitoring (and fixes) possible and economic to achieve, rather than waiting for customer complaints before assessing issues?
With an integrated “service assurance system” (i.e., ensuring the end-user quality of a service) approach, service providers can be very proactive. Many new concepts have been deployed toward that end. One example is dynamic line management (DLM), which continuously gathers network quality of service data from all active DSL lines – for example in an FTTx deployment – and compares each line's performance to a set of pre-programmed DSL line profiles. As specific metric thresholds are exceeded or not reached, the line configurations are automatically modified. This has the effect of increasing line stability (fewer errors and fewer re-syncs caused by line conditions). This all happens daily and, if handled well, prevents problems from being seen or experienced by the subscriber. The critical elements are network performance monitoring of key metrics and a proactive process for addressing issues before they rise to the level of customer awareness.
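The DLM loop described above can be sketched as a simple profile-selection step – a hypothetical illustration in which each line's daily error counts are matched against a small table of pre-programmed profiles; the profile names and limits are invented, not taken from any vendor's system.

```python
# Hypothetical DLM profile table: each line is assigned the most aggressive
# profile whose daily error budget it still meets.

PROFILES = [
    # (name, max_errored_seconds_per_day, max_resyncs_per_day)
    ("fast",        10,     1),
    ("interleaved", 100,    3),
    ("safe",        10**9,  10**9),  # catch-all fallback profile
]

def select_profile(errored_seconds, resyncs):
    """Return the first (most aggressive) profile whose limits the line meets."""
    for name, max_err, max_resync in PROFILES:
        if errored_seconds <= max_err and resyncs <= max_resync:
            return name
    return "safe"
```

A clean line stays on the fast profile; a noisier one is automatically stepped down to a more protected configuration before the subscriber notices instability.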
How well is quality measurement and monitoring integrated with customer care provision?
This is an area that is also receiving a lot of attention. It can take many forms. One approach JDSU supports ties a set of questions to the trouble desk's conversation with a customer reporting a problem, eliciting critical pieces of data that feed directly into the trouble-resolution approach. This then provides the field crews with data that can help them get to a root cause quickly. For example, it can help separate network performance issues from source content issues, or point to specific network segments as the most likely place to find and then fix the problem.
What measures are service providers taking to ensure that customer care operatives have actionable reports and information to hand?
They provide much better access to past records of activity for a given subscriber, so the customer feels the provider knows what has occurred in the past and can more readily take action in the present. They are also connecting field technicians to back-office data while in the field, so they can see the larger picture – perhaps even past performance data that directly helps with the current situation. All of the key players are able to communicate relevant information in a timely manner.
What about customer care’s role with ensuring quality of service?
They are the face of the provider when there is a problem. Being knowledgeable not only about the services but also about some of the network quality of service issues will help them provide useful information to the customer. Following well-scripted discovery questions can elicit key information, resulting in more rapid and efficient support activities. Customer care plays a key role!
What about video quality on the web? To what extent does the development of web-based streaming video services, paid or unpaid, mean that providers are investing in “quality assurance” measures?
Service providers will offer access to content that may be available on the web, but with better quality because it comes through their networks. New CDNs, or content delivery networks, include many mechanisms that can deliver better quality. For example, by caching high-demand content close to the subscriber, they greatly improve response times and delivered quality beyond what is possible over the public Internet. This may be coupled with early access to certain content. It may also include an enhanced search feature suite coupled with an intelligent history of preferences based on past consumption, among other features.
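The edge-caching mechanism can be sketched minimally – this is an illustrative LRU cache, assuming a capacity and origin-fetch interface that stand in for the real CDN machinery: high-demand titles are served locally, and the origin is contacted only on a miss.

```python
# Minimal sketch of CDN edge caching with LRU eviction.
# Capacity, hit/miss counters and the origin callback are illustrative.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self._items = OrderedDict()
        self.hits = 0
        self.misses = 0

    def fetch(self, title, origin):
        if title in self._items:
            self.hits += 1
            self._items.move_to_end(title)   # mark as most recently used
            return self._items[title]
        self.misses += 1
        content = origin(title)              # slow path: fetch from origin
        self._items[title] = content
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
        return content
```

The more often a title is requested, the more likely it stays cached at the edge, which is what improves response time for popular content.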
How far do the requirements differ from broadcast and managed-network on-demand services?
Broadcast service defines a need to distribute programming via multicast, whereas on-demand means unicast access. Managing a multicast network lends itself to more deterministic planning and engineering, since programming runs on a more fixed viewing schedule. On-demand service, by its very nature, is much more difficult to manage, since demand for a given program can vary so much.
What do you identify as the most recent developments in this area?
New technologies associated with CDNs, like dynamic caching and server load balancing, are enabling on-demand services to reach new levels of prompt access and high delivered quality. New dynamic streaming protocols such as Dynamic Adaptive Streaming over HTTP (DASH) are being standardized and adopted, providing superior quality to mobile devices as well as traditional TVs in these service areas.
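The adaptive decision at the heart of DASH can be sketched as a simple rate picker – a hedged illustration, not any particular player's algorithm: the client selects the highest available representation its measured throughput can sustain, with a safety margin. The bitrate ladder and margin here are invented values.

```python
# Illustrative DASH-style adaptive bitrate selection.
# The ladder of representations and the safety margin are assumptions.

BITRATE_LADDER_KBPS = [400, 1200, 2500, 5000]  # hypothetical representations

def pick_bitrate(measured_throughput_kbps, safety=0.8):
    """Choose the highest ladder rung not exceeding safety * throughput;
    fall back to the lowest representation when even that doesn't fit."""
    budget = measured_throughput_kbps * safety
    eligible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return eligible[-1] if eligible else BITRATE_LADDER_KBPS[0]
```

A client measuring 4 Mbps of throughput would pick the 2,500 kbps representation (leaving headroom), while a congested mobile link at 300 kbps would drop to the lowest rung – which is how the same stream serves both traditional TVs and mobile devices.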
John Williams, director of emerging markets, JDSU's Communications Test and Measurement business unit