H.264 Defined

August 22, 2022

Uncompressed video signals require enormous capacity for transport and storage. For this reason, video is almost always processed to reduce its size, a process called “video signal compression” (VSC).

Video is a series of still images, called frames, that when displayed at a high enough rate creates the illusion of motion. VSC achieves a high compression ratio by saving only the changes from one frame to the next, instead of saving each complete frame. With a well-designed encoder, this compression can be achieved with very little visible distortion.
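To make the idea concrete, here is a minimal, hypothetical Python sketch of difference coding: the first frame is stored in full, and each later frame is stored only as its change from the previous one. It is a toy illustration of the principle described above, not how MPEG-2 or H.264 actually works; real codecs add motion estimation, transforms, and entropy coding on top of this.

```python
# Toy illustration of inter-frame (difference) coding, not a real codec.
import numpy as np

def encode(frames):
    """Return the first frame plus a list of frame-to-frame differences."""
    key_frame = frames[0]
    deltas = [frames[i] - frames[i - 1] for i in range(1, len(frames))]
    return key_frame, deltas

def decode(key_frame, deltas):
    """Rebuild the full frame sequence by accumulating the differences."""
    frames = [key_frame]
    for delta in deltas:
        frames.append(frames[-1] + delta)
    return frames

# Two 4x4 grayscale "frames" that differ in only one pixel; the stored delta
# is mostly zeros, which is what makes it cheap to compress further.
frame_0 = np.zeros((4, 4), dtype=np.int16)
frame_1 = frame_0.copy()
frame_1[2, 2] = 255

key, deltas = encode([frame_0, frame_1])
restored = decode(key, deltas)
assert np.array_equal(restored[1], frame_1)
```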

The satellite TV companies Dish Network and DIRECTV were pioneers in the use of signal compression. These companies have used a standard called MPEG-2 (MPEG is short for the Moving Picture Experts Group). Without these techniques, the existing satellite systems could carry only a fraction of the channels currently possible.

Compression has also been applied to audio signals. For example, “MP3” is actually an abbreviation for “MPEG Audio Layer 3,” which is part of an MPEG audio/video compression standard. The MP3 player phenomenon would never have happened without signal compression.

MPEG-4 is a newer compression standard designed specifically for efficient video signal compression. It was originally developed for transporting video over the Internet, but has found wide success with HDTV signals as well. DivX is an example of an MPEG-4 encoder.

Sony was the first company to commercially introduce H.264 (also known as MPEG-4 Part 10, or AVC), a more advanced revision of MPEG-4. Blu-ray Disc, for example, uses the H.264 standard for storage. The standard has also gained rapid acceptance in the security camera industry. H.264 is roughly two to three times more efficient than the MPEG-2 standard at comparable quality.