
What Do Codecs & Containers Do?

Updated: Aug 2, 2019

The distinction between codecs and container file formats is often ambiguous. This is due in part to a general lack of standardization, confusing marketing terms, and ambiguous filename extensions. This page attempts to clarify the distinction briefly and without going into technical details. If you want to learn more about containers and codecs, you should probably look at Wikipedia's pages on the subject.



Codecs


"Codec" is short for "compression and decompression" (or "coder/decoder"): a codec is a way of encoding and decoding streams. Its job is typically to compress data (and decompress it on playback) so that you can store and transmit files at a smaller size.
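To make the idea concrete, here is a toy "codec" sketched in Python. It is not a real video codec (and, unlike most video codecs, it is lossless), but it shows the basic shape of the job: encode a stream into a smaller representation, then decode it back.

```python
# A toy run-length "codec": encode a stream into a smaller representation,
# then decode it back. Real video codecs are vastly more sophisticated, but
# the encode/decode round trip is the same basic idea.

def encode(data: str) -> list[tuple[str, int]]:
    """Run-length encode a string: 'aaab' -> [('a', 3), ('b', 1)]."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def decode(runs: list[tuple[str, int]]) -> str:
    """Expand the runs back into the original string."""
    return "".join(ch * count for ch, count in runs)

original = "aaaaaabbbbcccddddddd"
compressed = encode(original)
assert decode(compressed) == original
print(compressed)  # [('a', 6), ('b', 4), ('c', 3), ('d', 7)]
```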


A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don’t really need, and codecs are pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It’s not required viewing to understand this article, but it certainly won’t hurt.



If you’re skipping the video, here are some very basic explanations:


  • Chroma subsampling: Throws away some color data (4:4:4 is no chroma subsampling, 4:2:2 is some chroma subsampling, and 4:2:0 is lots of chroma subsampling). Bad if you’re doing color correction. Really bad if you’re doing green screen or VFX work.

  • Macro-blocking: Finds blocks (of varying size) of similar colors and makes them all the same color. Bad for VFX and color correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.

  • Temporal compression: Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.

  • Bit depth: The number of bits used per color value, which determines how many possible colors there are. Deeper bit depth (larger numbers) is good for color correction and VFX. (A rough sketch of the bit-depth and chroma-subsampling numbers follows this list.)
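If it helps to see the arithmetic, here is a small Python sketch that puts rough numbers on the chroma-subsampling and bit-depth points above. The 1920x1080 resolution and the plane layout are just assumptions for illustration, and no actual compression is involved.

```python
# Bit depth: values per channel and total colors for 8-bit vs. 10-bit.
for bits in (8, 10):
    per_channel = 2 ** bits
    print(f"{bits}-bit: {per_channel} values per channel, "
          f"{per_channel ** 3:,} possible colors")

# Chroma subsampling: raw data per frame for an assumed 1920x1080 image at
# 8 bits per sample, stored as one luma (Y) plane plus two chroma planes.
width, height = 1920, 1080
luma = width * height                      # full-resolution luma plane
chroma_per_plane = {
    "4:4:4": luma,        # chroma at full resolution (no subsampling)
    "4:2:2": luma // 2,   # chroma halved horizontally
    "4:2:0": luma // 4,   # chroma halved horizontally and vertically
}
for scheme, chroma in chroma_per_plane.items():
    total_bytes = luma + 2 * chroma        # 1 byte per sample at 8 bits
    print(f"{scheme}: {total_bytes / 1e6:.1f} MB per uncompressed frame")
```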


Codec Comparison Table


I’ve also pulled together a list of all of the most common codecs used in the postproduction world. This list can help you compare different codecs against each other and make the best decision for your project.

There are many different codecs that can be used in the editing process, but the ones I’ve included are by far the most common. There is a significant advantage to using popular codecs – they are more likely to work on your system, your client’s system, your system-in-five-years, etc. And it’s easier to find help if something goes wrong.

Open the table in a new tab, and think about which codecs might be a good fit for you as you read through the article.



Lossiness


One of the columns in the table is “lossiness,” which is an important concept with codecs. When I’m talking about lossiness, I don’t necessarily mean what your eye sees; I mean the amount of data that is retained by the codec, only some of which you can see. The question is: if I had an uncompressed image and then compressed it with this codec, how similar would the new image be to the old one? How much information is lost in the transcode? If the two images are very similar, the codec is not very lossy; if they’re quite different, it’s more lossy.
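If you want to put a number on that similarity yourself, one rough approach is to compare a frame before and after compression with a metric such as PSNR (peak signal-to-noise ratio). Here is a minimal Python sketch using NumPy and Pillow; "original.png" and "compressed.png" are placeholder filenames for two same-sized exports of the same frame.

```python
import numpy as np
from PIL import Image

def psnr(path_a: str, path_b: str) -> float:
    """Peak signal-to-noise ratio between two same-sized images, in dB.

    Higher means the images are more similar (less lossy); identical
    images give infinity.
    """
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(255.0 ** 2 / mse)

# Placeholder filenames: an uncompressed frame and the same frame after a
# round trip through the codec you are evaluating.
print(f"PSNR: {psnr('original.png', 'compressed.png'):.2f} dB")
```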

Lossiness is a combination of the techniques the particular codec uses and its bitrate. A more lossy codec is not necessarily “bad.” In some cases (when viewing online, for instance), it’s really not necessary to retain 100% of the original image.


Using a more lossy codec can be a really smart move because of how much space it saves.


If the image looks just as good to my eye, then why should I care if it’s technically ‘lossy’?


You should care because you may want to change the image. If you are doing any sort of color correction, then you will be changing the image, which can reveal elements that weren’t visible (or prominent) when you captured it.

For example, here is an image that was captured raw.



Here is a screengrab of it compressed with H.264, using the standard YouTube-recommended settings.



And then compressed with DNxHD 350x:



They all look pretty much the same, don’t they? The visual quality is just about the same, and the H.264 file is a fraction of the size of the DNxHD file. This is why it’s the recommended setting for YouTube. It looks just about as good to the eye, and the file is much easier to upload to the internet.


The trouble with the H.264 version, however, comes when you try to make changes to the image. What if you wanted to increase the exposure?





Now we can see where the highly-compressed image falls apart. Her hair and shirt look terrible in the H.264 image, and the buildings by the river look all mushy.






This is why you really want a high-quality codec when you capture the image – because you will probably want to make changes later on, but you don’t know yet what those changes might be. You’ll want to tweak the color and contrast, maybe tweak the speed, maybe add some VFX. A highly-compressed file doesn’t allow for those changes without breaking down.

This is why it’s a good idea to capture your footage in 10-bit even if you may be outputting an 8-bit file in the end – you don’t know, when you shoot, which bits you’re going to want.
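To see why the extra bits matter once you start changing the image, here is a small NumPy sketch on synthetic data (not a real frame): a smooth, dark gradient is quantized to 8-bit and 10-bit precision, then “pushed” two stops brighter. The 8-bit version ends up with roughly a quarter as many distinct levels, which is what shows up on screen as banding and mushy shadows.

```python
import numpy as np

# A smooth, dark gradient (0 to 25% of full scale), standing in for the
# shadow detail in a captured frame.
gradient = np.linspace(0.0, 0.25, 10_000)

for bits in (8, 10):
    levels = 2 ** bits - 1
    # Quantize to the target bit depth (what gets stored)...
    stored = np.round(gradient * levels) / levels
    # ...then raise the exposure by two stops in post (multiply by 4).
    pushed = np.clip(stored * 4.0, 0.0, 1.0)
    print(f"{bits}-bit: {len(np.unique(pushed))} distinct levels after the push")
```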



Containers ("Wrappers")


The container is what we typically associate with the file format. Containers "contain" the various components of a video: the stream of images, the sound, and anything else. For example, you could have multiple soundtracks and subtitles included in a video file, if the container format allows it. Examples of popular containers are Ogg, Matroska, AVI, and MP4.
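If you want to see what a container is holding on your own machine, ffprobe (part of FFmpeg) can list every stream inside it. Here is a small Python sketch that assumes ffprobe is installed and on your PATH; "movie.mkv" is a placeholder filename.

```python
import json
import subprocess

# Ask ffprobe to describe every stream wrapped in the container:
# video, audio tracks, subtitles, and so on.
result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_streams", "movie.mkv"],   # "movie.mkv" is a placeholder
    capture_output=True, text=True, check=True,
)
for stream in json.loads(result.stdout)["streams"]:
    print(stream["index"], stream["codec_type"], stream.get("codec_name"))
```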





Resource: https://blog.frame.io/2017/02/15/choose-the-right-codec/
