In video coding, transparency means the compressed result is perceptually indistinguishable from the uncompressed source.
But the challenge is how to measure transparency.
Today the most reliable way is a subjective experiment: asking human viewers to rate the quality of videos on a scale of 1 (bad) to 5 (excellent). The average of these ratings gives the so-called Mean Opinion Score (MOS).
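The averaging step itself is trivial; a minimal sketch with hypothetical viewer ratings (the numbers below are illustration data, not real experimental results):

```python
# Minimal sketch: computing a Mean Opinion Score (MOS) from
# subjective ratings on the 1 (bad) to 5 (excellent) scale.

def mean_opinion_score(ratings):
    """Average a list of integer ratings in [1, 5] into a MOS."""
    if not ratings:
        raise ValueError("need at least one rating")
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("ratings must lie in [1, 5]")
    return sum(ratings) / len(ratings)

ratings = [5, 4, 4, 3, 5, 4]  # six hypothetical viewers
print(mean_opinion_score(ratings))
```

The hard part is not the arithmetic but gathering enough raters under controlled viewing conditions to make the average trustworthy.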
MOS measurements are time-consuming, expensive, and hard to reproduce. They also cannot be embedded in real-time quality management systems or automated design processes.
We need algorithms that measure video quality in real time and give reliable metrics across content types and distribution networks. A metric that works well for 5G streaming may not suit content editing in the cloud.
There are currently two types of video quality metrics.
First, full-reference (FR) metrics need the original uncompressed video. That reference is simply not available in many real-time applications, though FR metrics are useful in lab experiments whenever subjective tests are impractical due to time and cost.
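One of the simplest full-reference metrics is PSNR, which compares the compressed frame against the original pixel by pixel. A minimal pure-Python sketch for 8-bit grayscale pixel lists (the sample values are made up for illustration):

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak signal-to-noise ratio between two equal-length lists of
    8-bit pixel values; higher means closer to the reference."""
    if len(reference) != len(distorted):
        raise ValueError("frames must have the same size")
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return math.inf  # identical frames: no distortion at all
    return 10 * math.log10(max_value ** 2 / mse)

original   = [52, 55, 61, 59, 79, 61, 76, 61]
compressed = [50, 56, 60, 60, 78, 62, 75, 62]
print(round(psnr(original, compressed), 1))
```

PSNR is easy to compute but correlates only loosely with perceived quality, which is exactly why better metrics are an active research topic.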
The second type is no-reference (NR) video quality metrics. Quite a few exist for still images, yet none performs adequately on video: NR metrics adopted from still imaging do not handle temporal artifacts such as motion jerkiness or "mosquito" noise.
RayShaper researchers are using AI algorithms to develop NR metrics for video. Deep convolutional neural networks extract features from the compressed content; selected spatial and temporal features are then used to estimate perceived quality. Our goal is to help cloud-based content producers and distributors deliver transparent viewing quality.
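The production models are deep CNNs, but the idea of pooling spatial and temporal features can be illustrated with two hand-crafted stand-ins: per-frame gradient energy (spatial detail) and frame-difference energy (temporal activity). Everything below is an illustrative assumption, not RayShaper's actual pipeline:

```python
# Illustrative stand-in for learned features: one spatial and one
# temporal statistic per frame transition of a grayscale video,
# where each frame is a row-major list of pixel rows.

def spatial_activity(frame):
    """Mean absolute horizontal gradient of one frame (detail/texture)."""
    h, w = len(frame), len(frame[0])
    total = sum(abs(frame[y][x + 1] - frame[y][x])
                for y in range(h) for x in range(w - 1))
    return total / (h * (w - 1))

def temporal_activity(prev_frame, frame):
    """Mean absolute pixel difference between consecutive frames (motion)."""
    h, w = len(frame), len(frame[0])
    total = sum(abs(frame[y][x] - prev_frame[y][x])
                for y in range(h) for x in range(w))
    return total / (h * w)

def nr_features(frames):
    """One (spatial, temporal) feature pair per frame transition."""
    return [(spatial_activity(cur), temporal_activity(prev, cur))
            for prev, cur in zip(frames, frames[1:])]

# Two tiny 2x3 "frames": a horizontal ramp, then a brightened copy.
f0 = [[0, 10, 20], [0, 10, 20]]
f1 = [[10, 20, 30], [10, 20, 30]]
print(nr_features([f0, f1]))
```

In a learned NR metric, a CNN replaces these hand-crafted statistics, and a regression head maps the pooled feature vector to a predicted MOS.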