Video Compression, Codecs and Transcoding

dpreview by Uwe Steinmueller

Video compression: your friend and enemy

Video compression is clearly our friend because without a lot of compression we would have a very hard time handling the massive amount of data a 1080p video stream produces. Think of two megapixels per frame at 24, 30 or 60 frames per second (which translates to 48, 60 or even 120 megapixels of data per second). On the flip side, video compression limits the image quality we can achieve, so it pays to understand how we deal with the implications of compressed data. It is a bit like the difference between Raw and JPEG images from still cameras, though with video the compression is much stronger. That said, it is worth remembering that when we watch a movie, the moving image is rarely analyzed as critically as a still image.
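The arithmetic above can be sketched in a few lines of Python. The frame size and the 8-bit RGB assumption here are illustrative; actual camera data rates depend on the sensor readout and recording format:

```python
# Rough raw-data arithmetic for an uncompressed 1080p stream.
WIDTH, HEIGHT = 1920, 1080          # 1080p: roughly two megapixels per frame
BYTES_PER_PIXEL = 3                 # assumed 8-bit RGB, before any subsampling

pixels_per_frame = WIDTH * HEIGHT   # 2,073,600 pixels

for fps in (24, 30, 60):
    pixels_per_second = pixels_per_frame * fps
    megabytes_per_second = pixels_per_second * BYTES_PER_PIXEL / 1e6
    print(f"{fps} fps: {pixels_per_second / 1e6:.1f} Mpx/s, "
          f"~{megabytes_per_second:.0f} MB/s uncompressed")
```

At 60 fps this works out to well over 300 MB per second of raw data, which is why cameras cannot simply store the stream uncompressed.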

Overall video compression is about the trade-off between:

  • Data volume
    • Data storage needs
    • Data processing speed (in camera, on computer)
  • Image quality
    • Detail
    • Color

Video Compression Details

Because the data is digital, some lossless (non-destructive) algorithms are also used, but we don't need to worry about them because they have no influence on image quality.

Here are the different techniques commonly used to get the video stream down to a manageable data volume:

  • Detail Compression
    This is similar to JPEG compression: the algorithms try to preserve the overall impression of the image but sacrifice finer detail.
  • Bit depth:
    8-14 bit. All prosumer video cameras and HDSLRs create video streams at 8 bit, which is a bit like shooting 8-bit JPEGs with stronger compression. And just as when you shoot stills, if you have more than 8 bits per color channel available you have more latitude during color correction (also called 'grading'). At 10 bit there are 1024 shades per color channel, as opposed to only 256 at 8 bit.
  • Chroma Subsampling
    'Because the human visual system is less sensitive to the position and motion of color than luminance, bandwidth can be optimized by storing more luminance detail than color detail. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate. In video systems, this is achieved through the use of color difference components. The signal is divided into a luma (Y') component and two color difference components (chroma).'
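The bit-depth point above can be illustrated with a few lines of Python. The `quantize` helper is a hypothetical illustration, just to show how much coarser 8-bit tonal steps are than 10-bit ones:

```python
# Number of tonal levels per channel at different bit depths.
for bits in (8, 10, 12, 14):
    print(f"{bits}-bit: {2 ** bits} shades per channel")

def quantize(value, bits):
    """Snap a 0..1 value to the nearest representable level at a given bit depth."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

# The same mid-gray value lands on a much finer step at 10 bit than at 8 bit;
# it is these coarse 8-bit steps that show up as banding after heavy grading.
print(quantize(0.5004, 8), quantize(0.5004, 10))
```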
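Chroma subsampling can be sketched as follows. This is a minimal illustration assuming BT.601 luma weights and a toy 4x4 chroma plane, not any particular codec's implementation:

```python
# Minimal sketch of 4:2:0-style chroma subsampling: keep luma (Y') at
# full resolution, average the chroma planes over 2x2 blocks.

def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion from RGB to luma + two chroma components."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(plane):
    """Average a 2D chroma plane over 2x2 blocks, keeping 1/4 of the samples."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1] +
              plane[y + 1][x] + plane[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# Toy 4x4 chroma plane: after subsampling, only a 2x2 plane of block
# averages remains, while the luma plane would stay at full resolution.
cb = [[128, 130,  90,  92],
      [132, 134,  94,  96],
      [ 60,  62, 200, 202],
      [ 64,  66, 204, 206]]
print(subsample_420(cb))
```

Storing the two chroma planes at quarter resolution this way halves the total sample count while leaving the full-resolution luma, which carries most of the perceived detail, untouched.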
