FFMPEG 5 “Lorentz” Release

FFMPEG 5.0 “Lorentz”, a new major release of FFMPEG, is out now!

According to Jean-Baptiste Kempf’s blog, this new release brings major API updates and several new features, including:

• A few new decoders, including a native Speex decoder and decoders for MSN Siren, GEM Image and Apple Graphics (SMC);
• Big additions to VideoToolbox support with VP9 and Prores decoding and Prores encoding;
• Improvements on Vulkan support and notably Vulkan filters;
• Optimizations for the loongarch platform;
• Slice-threading in swscale;
• RTP packetizer for uncompressed video (RFC 4175);
• Support for libplacebo video filter for all your HDR needs;
• Numerous audio and video filters, notably segment, latency, decorrelate and several color filters;

The official release note is available here.

FFMPEG, Closed Captions, DVB/Teletext and the MPEG-TS container: a lesson learned

Before getting started, here are some basic concepts for clarity purposes:

  • “Subtitles” is a synonym for “Captions” in the US;
  • In the TV Broadcast world the term “Closed Captions” is intended mainly for the hard of hearing audience (e.g.: subtitles that can be switched on and off and that are delivered as a separate asset from the main program);
  • Closed Captions are intended for Viewers, watching television; for Broadcasters, to increase reach or meet regulatory requirements; for Service Providers, selling services for making TV accessible and distributing TV either traditionally or over the web; and for Manufacturers, selling equipment in the broadcast chain and in domestic environments.
  • The BBC provided the world’s first Teletext service in 1974 (Ceefax): RAI’s “Televideo” in Italy and other European broadcasters followed, also using Teletext as a subtitles service.
  • EBU Teletext has long been the standard format for hard-of-hearing subtitling and multi-lingual subtitling in Europe, using the Teletext systems. Its more formal name is “CCIR Teletext System B”, while the current standard is the “Enhanced Teletext specification” (ETS 300 706).
  • In today’s digital world, “EBU Teletext” is now DVB/Teletext;
  •  The Teletext system was also used for a number of experimental systems, notably in the United States, but these were never as popular as their European counterparts and most closed by the early 1990s. 
  • In the US and Canada, the standard for Closed-Captions is called “CEA-708“, which is an evolution of the “CEA-608” standard.

A video can have:

  • “Embedded Captions”: namely a stream of data (the captions), “injected” (or muxed) within the main video file;

  • A “Sidecar File”: a separate data file which contains the captions, in various subtitling formats;

  • “Burned Subtitles” (or Open Captions): meaning that the subtitles are hardcoded (rendered) and always displayed in the video.

FFMPEG can produce (encode/decode/transcode) both “Embedded” and “Burned” captions, with some specific limitations for the DVB/Teletext format, as discussed later in this article.

In the broadcast world, “Embedded Captions” are often contained (muxed) in an MPEG-TS file (.ts).

The “MPEG Transport Stream” (MPEG-TS, MTS), or simply transport stream (TS), is a standard digital container for the transmission and storage of audio, video, and Program and System Information Protocol (PSIP) data. A container is a file format that allows multiple data streams to be embedded (muxed) into a single file, usually along with metadata identifying and further detailing the format of those streams.

An MPEG-TS file containing 1 video stream, 2 audio streams and a subtitles (data) stream could be displayed by FFMPEG as follows:

[Figure: FFMPEG’s output of an example MPEG-TS (.ts) file, with DVB/Teletext subtitles]
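To produce such an overview yourself, you can point FFPROBE (FFMPEG’s companion inspection tool) at the file; the file name input.ts is a hypothetical example:

```shell
# List all streams (video, audio, subtitles/data) muxed in an MPEG-TS file.
# "input.ts" is a hypothetical file name.
ffprobe -hide_banner -i input.ts
```

The stream listing (codec, resolution, language tags) is printed for every program in the transport stream.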

MPEG-TS is a standard format used in broadcast systems, such as DVB (Digital Video Broadcasting), ATSC (Advanced Television Systems Committee: an American set of standards for digital television transmission over terrestrial, cable and satellite networks) and IPTV (Internet Protocol Television).

At the time of writing this article, FFMPEG (latest version 4.4) can handle (encode/decode/transcode) the DVB Subtitle format (dvb_sub), but can only decode (read) or stream-copy DVB/Teletext format (dvb_teletext) data streams.

In order to make FFMPEG able to decode DVB/Teletext stream data, you will need to build it with the libzvbi library.
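With such a build, a sketch for extracting a DVB/Teletext stream to an .srt sidecar might look like this. The file names are hypothetical, and the txt_format option belongs to the libzvbi teletext decoder — verify it on your build with `ffmpeg -h decoder=libzvbi_teletextdec`:

```shell
# Sketch: decode the first DVB/Teletext stream of a TS file to SubRip text.
# Requires an FFMPEG build configured with --enable-libzvbi.
# "input.ts" and "subtitles.srt" are hypothetical names.
ffmpeg -txt_format text -i input.ts -map 0:s:0 -c:s srt subtitles.srt
```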

So, if you plan to include FFMPEG in your Closed-Captions workflow using the MPEG-TS container, please bear in mind that you won’t be able to transcode subtitles into DVB/Teletext (e.g.: from “.srt” to DVB/Teletext), but you will be able to produce subtitles in the DVB (dvb_sub) format.
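Note that FFMPEG transcodes subtitles only text-to-text or bitmap-to-bitmap, so a dvb_sub stream has to come from another bitmap subtitle format. A sketch, assuming a hypothetical DVD source with bitmap (dvdsub) subtitles:

```shell
# Sketch: convert a bitmap DVD subtitle stream into DVB subtitles (dvbsub)
# inside an MPEG-TS, stream-copying video and audio.
# "input.vob" and "output.ts" are hypothetical names.
ffmpeg -i input.vob -map 0:v -map 0:a -map 0:s \
  -c:v copy -c:a copy -c:s dvbsub output.ts
```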

What are the differences between DVB (FFMPEG’s dvb_sub) and DVB/Teletext (FFMPEG’s dvb_teletext) formats?

DVB/Teletext: this is the standard for wrapping the good old EBU Teletext into a DVB signal. This type of subtitles can’t be customized (no font, size, or color customization to speak of).

DVB Subtitling: this is a subtitle bitmap image that is compressed, sent in the DVB transport stream alongside the video and audio, then decoded and displayed by the receiver. The latest specification also includes HD bitmap support. This gives the broadcaster full control of the appearance of the subtitles, as well as full support for all languages.

In some scenarios you might need to produce both formats (DVB bitmap + DVB/Teletext): if this is your case, unfortunately FFMPEG won’t help you at this time, and you might need to come up with a smart solution before exploring any paid option (…like some $7,000+ software which can produce both DVB/Teletext and DVB bitmap Closed Captions and mux the data streams into the MPEG-TS file… using FFMPEG!)

A lesson learned.

How to create an Instagram Video with a Still-Image and an Audio File (or a YouTube Art Track)

Sometimes you may want to publish a very simple Instagram post (or a Story, or a Reel) composed of a simple still picture plus an audio file, in a similar way to a YouTube Art Track.

While this can be one of the easiest tasks to accomplish with FFMPEG, you might overlook some important aspects of the process.

You might think that the easiest and quickest way will be something like this:

ffmpeg -i your_picture.jpg -i your_audio.wav -map 0:v -map 1:a your_instagram_post.mp4

Unfortunately this is the wrong way to do it.

Although FFMPEG will produce an MP4 video without reporting any errors, you will run into upload issues or errors once you try to upload it to Instagram (e.g.: “Instagram video must be 0 seconds or more”, a blank file, an unseekable video, or similar issues).


First of all, you want to check that your final composition (still picture + audio) meets Instagram’s tech specs. Specifically, you will need a 1080×1080 output for an Instagram post, with AAC audio, in stereo, at 128 kbps.

You will also need to specify a frame rate (25 or 30 FPS, for example) and adjust the audio parameters accordingly.

Thus, a working example will be:

ffmpeg -framerate 1/60 -i your_still_picture.jpg -i your_audio_track.wav -map 0:v -map 1:a -r 25 -c:v h264 -tune stillimage -crf 18 -c:a aac -b:a 128k -ac 2 -ar 44100 -pix_fmt yuv420p -max_muxing_queue_size 1024 -shortest your_final_instagram_video.mp4

Let’s break down this example.

-framerate 1/60: this will produce a video of 60 seconds, and it’s to be used when the source input is a still picture, such as in this example of a single JPG. Reference: Video and Audio File Format Conversion

-i your_still_picture.jpg: is an example input of a 1080×1080 pixel image, in JPG format

-i your_audio_track.wav: is an example audio input in WAV format

-map 0:v -map 1:a: the mapping options specify how the various inputs are mapped to the final output. In this case the first input (0:v) will be selected as the main video source, while the second input (1:a) will be used as the main audio source.

-r 25: is the option needed for creating an output video that runs at 25 FPS. Reference: FFMPEG Main Options

-c:v h264 -tune stillimage -crf 18: these are example settings of a basic h264 encoding, with an optional tuning for a still image source. Reference: Encode in h264.

-c:a aac -b:a 128k -ac 2 -ar 44100: these are the settings for the audio encoding, as per Instagram tech specs: they will produce a standard AAC stream, at 128 kbps, in stereo, at 44.1 kHz. For state-of-the-art audio results you may want to investigate the FDK_AAC library (namely the Fraunhofer FDK AAC codec library) with the option “-c:a libfdk_aac”, which requires the “--enable-libfdk-aac” flag when compiling FFMPEG during the installation process.

-pix_fmt yuv420p: this is optional, and it’s an instruction for a better compatibility with players such as QuickTime. (More on this point here).

-max_muxing_queue_size 1024: this instruction is for older builds of FFMPEG, and is sometimes required when a “Too many packets buffered for output stream” error is returned by the shell. The sample value of 1024 is indicative and has been tested for the above example.

-shortest: what if your audio track is longer than the required length of your output video? With this instruction you will tell FFMPEG to use the shortest input as the main length reference. In our example the length of the first input is 60 seconds: thus, the final output (despite a longer audio input) will only last 60 seconds. Reference: FFMPEG Advanced Options

Testing FFMPEG Commands with FFPLAY

As mentioned in the official documentation, FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library.
SDL stands for “Simple DirectMedia Layer”: a cross-platform development library designed to provide low-level access (access to the hardware of a computer) to audio, keyboard, mouse, joystick, and graphics hardware. It can be used to make animations and video games.

FFPlay is often used as a testing tool in order to verify the FFMPEG output.

In order to test your FFMPEG command with FFPLAY, all at once in a single command, you can use the PIPE function, namely sending the resulting output of your first command (an action performed by FFMPEG) as an input for the next instruction (to be played by FFPLAY).

Let’s take the following example: generate a Color Bars Test Pattern (as previously discussed in this article) and reproduce it with FFPLAY.

The code will be as follow:

ffmpeg -re -f lavfi -i "smptehdbars=rate=30:size=1920x1080" \
-f lavfi -i "sine=frequency=1000:sample_rate=48000" \
-vf drawtext="text='HELLO WORLD! %{localtime\:%X}':rate=30:x=(w-tw)/2:y=(h-lh)/2:fontsize=48:fontcolor=white:box=1:boxcolor=black" \
-c:v h264 -profile:v baseline -pix_fmt yuv420p \
-preset ultrafast -tune zerolatency -crf 28 -g 60 \
-c:a aac -f matroska - | ffplay -

ffmpeg -re : FFmpeg’s “-re” flag means to “Read input at native frame rate. Mainly used to simulate a grab device.”

-f lavfi -i “smptehdbars=rate=30:size=1920x1080”: this is the Libavfilter input virtual device. The command will generate standard color bars as per the SMPTE specs.

-f lavfi -i “sine=frequency=1000:sample_rate=48000“: this will generate a sine tone at 1 kHz so to have an audio signal.

-vf drawtext=”text=’HELLO WORLD! %{localtime\:%X}’:rate=30:x=(w-tw)/2:y=(h-lh)/2:fontsize=48:fontcolor=white:box=1:boxcolor=black”: this will overlay a test message on top of the color bars, displaying a “Hello World!” text message plus the local time of your remote server or local computer, at 30 frame-per-second, with the default font, 48px size, with a white text color on top of a black box.

-c:v h264 -profile:v baseline -pix_fmt yuv420p: this will instruct FFMPEG to encode the video in h264, Baseline profile, with a pixel format suitable for all players including QuickTime (see “Encoding for Dumb Players”).

-preset ultrafast -tune zerolatency -crf 28 -g 60: these are h264 settings suitable for streaming purposes (speed, quality and key-frame interval) and are used here as an example.

-c:a aac -f matroska - | ffplay -: this will instruct FFMPEG to encode the audio in AAC, wrap the final output in the Matroska format and pipe it into an instance of FFPLAY.
(Notice the “-” symbols and the pipe symbol “|”.)

Output result:

Generate SMPTE Color Bars with 1kHz Audio Tone

The following are some useful FFMPEG commands for testing purposes.
You may want to change the output at the end (in this example set as an RTMP output) and use it for checking the status of a RTMP transmission, the levels of your signals, etc.

The following code will create standard color bars with a 1 kHz test tone, according to SMPTE recommended practices.
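A sketch of such a test generator, pushing to a placeholder RTMP output (replace the URL with your own endpoint):

```shell
# Generate SMPTE HD color bars plus a 1 kHz sine tone and push them
# to an RTMP endpoint. The rtmp:// URL is a placeholder.
ffmpeg -re -f lavfi -i "smptehdbars=rate=25:size=1920x1080" \
       -f lavfi -i "sine=frequency=1000:sample_rate=48000" \
       -c:v h264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p \
       -c:a aac -b:a 128k \
       -f flv rtmp://your-server/live/stream-key
```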

Reference: FFMPEG Lavfi

FFMPEG Tutorial: convert and stream your videos with HLS and VideoJS.

The following is a step-by-step guide on how to prepare and stream a file in HLS format using FFMPEG and Bento4, and how to embed it on the web with VideoJS.

Step #1: Optimize your file for streaming destination

Let’s take for example a source file in Apple ProRes (4444) format (master.mxf), at 1920×1080 (Full-HD), in stereo (L+R), running at 24 FPS. With this step we are going to create a file in h264 format, with the same resolution and a final bitrate of 5,000 kbps.

NOTE: All syntaxes and full explanations of this command can be found in the book FFMPEG – From Zero to Hero:

ffmpeg -i master.mxf -c:v h264 -crf 22 -tune film -profile:v main -level:v 4.0 -minrate 5000k -maxrate 5000k -bufsize 5000k -r 24 -keyint_min 24 -g 48 -sc_threshold 0 -c:a aac -b:a 128k -ac 2 -ar 44100 -pix_fmt yuv420p -movflags +faststart output.mp4 

Now, your “output.mp4” will have a key-frame interval of 2 seconds (as required by the majority of video platforms).
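You can verify the key-frame cadence with FFPROBE; a sketch (decoding every frame is slow, hence the head):

```shell
# Print the type (I/P/B) of the first frames of the encoded file.
# With -g 48 at 24 FPS, an "I" frame should appear every 48 frames (2 s).
ffprobe -hide_banner -select_streams v:0 \
        -show_entries frame=pict_type -of csv output.mp4 | head -n 50
```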

NOTE: In order to produce a standard HLS Package, please refer to the Apple’s HLS Authoring Specifications

Step #2: Segment and package for HLS Streaming

This step will require you to install Bento4 from Axiomatic Systems.

The command will be:

mp42hls output.mp4 --verbose --output-single-file

At this point you will end up with a folder in your path, named “output”, which contains a “master.m3u8” file and a “media-1” folder with all the segments in one single file. Alternatively you can remove the option “--output-single-file” in order to produce separate segments of your video. You can then upload the entire “output” folder to your server.
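As an alternative sketch, FFMPEG itself can also segment a file for HLS, without Bento4 (file and folder names here are examples):

```shell
# Segment output.mp4 for HLS without re-encoding: 6-second segments
# plus a VOD playlist, written into an "output" folder.
mkdir -p output
ffmpeg -i output.mp4 -c copy \
       -hls_time 6 -hls_playlist_type vod \
       -hls_segment_filename 'output/segment_%03d.ts' \
       output/playlist.m3u8
```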

Step #3: Upload the Bento4’s generated files into your server

This step is pretty self-explanatory.

Step #4: Install VideoJS and embed the HLS video

VideoJS is the most popular open source HTML5 player framework, created by the guys at Brightcove.

The code to embed a responsive VideoJS player along with your HLS video is pretty simple. You have to insert the following into your webpage, whether through HTML or PHP:

<link href="https://vjs.zencdn.net/7.10.2/video-js.css" rel="stylesheet" />
<script src="https://vjs.zencdn.net/7.10.2/video.min.js"></script>
<video id="myvideo" class="video-js vjs-default-skin vjs-big-play-centered" controls
preload="auto" controlslist="nodownload" width="100%" height="100%" poster="/poster.jpg"
data-setup='{"fluid":true,"controls": true, "autoplay": false, "preload": "auto"}'>
<source src="/output/stream.m3u8" type="application/x-mpegURL">
</video>

Please note that the above script will embed a VideoJS Player with the “Play” button centered (the default play button is set in the upper left corner), with the download feature disabled and the preload value set to “auto”. You will also need to set the path of your “poster.jpg” in order to display a poster frame for your player (the default poster is a black screen).

Optional Step: Enable CORS (Cross-Origin-Resource-Sharing) on your Server with .htaccess

If you are working with VideoJS, on some occasions you might encounter “CORS issues” when implementing the player into your website using simple HTML. As a result, your VideoJS player just won’t play any video at all, either displaying an error message or staying in an infinite buffering state.

CORS stands for Cross-Origin-Resource-Sharing, a safety mechanism that allows a server to indicate any other origins (domain, scheme, or port) than its own from which a browser should permit loading of resources.

To solve this problem, while there are many ways to enable CORS, the easiest one is to create an HTACCESS rule, by editing (or creating) the .htaccess file on the server in both the root directory AND the directory where your webpage will display the VideoJS player.

The following code has been tested and works in order to enable CORS, using an HTACCESS file:

# Handle the CORS preflight (OPTIONS) requests with an empty 204 response

RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [L,R=204]

# Add Custom Headers

Header set X-Content-Type-Options "nosniff"
Header set X-XSS-Protection "1; mode=block"

# Always set these headers for CORS
Header always set Access-Control-Max-Age "1728000"
Header always set Access-Control-Allow-Origin "https://www.YOURWEBSITE.com"
Header always set Access-Control-Allow-Methods "GET,POST,OPTIONS,DELETE,PUT"
Header always set Access-Control-Allow-Headers "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type"
Header always set Access-Control-Allow-Credentials "true"

Final Result

Click here for a HTML Sample (Sample Footage Courtesy of Arri).

How to install FFMPEG 4.x

Instructions for MacOS, Linux and Windows.

MacOS X contains a great app called “Terminal” which is inside the Utility Folder.


The Terminal application on MacOS X is a standard shell with a pre-installed BASH version 3.

Please note: the latest MacOS X versions use ZSH (the “Z shell”) as the default shell. In order to use all the tools and commands described in this book I suggest you switch from ZSH to BASH.

To do so, just type the word bash as soon as your Terminal opens, and press Enter. This will enable the BASH shell immediately.

First thing to do with Terminal: choose or locate your default working directory.

By default Terminal will process your commands on the current USER directory path (aka your Home Directory).

To access your “Desktop” directory on MacOS X, for example, you can type the following command on Terminal:

cd ~/Desktop

The above command means “change the directory where I am located to the user’s Desktop”. You just want to make sure that everything you output from your Terminal will be easily located on your computer once the processes are done.

For example: you might want to process a large batch of videos. In this case it would be a good idea to create a folder on your desktop and call it “processed”.

To create this folder you will type:

mkdir ~/Desktop/processed

To access this folder on your Desktop, within Terminal, you will type:

cd ~/Desktop/processed

While there are several ways to install FFMPEG on MacOS X or on Ubuntu, I suggest you download and install one specific program called “HomeBrew” through the Terminal application.

Just open the Terminal Application on MacOS X and then paste the following code:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

The above mentioned code is a BASH command that basically says:

“Please BASH, run the command within the quotes so to download and install HomeBrew from the HomeBrew website onto my computer.”

For the sake of simplicity, the above command won’t be analyzed word by word. A complete technical explanation of the command can be found in the Additional Notes of my book and/or by visiting the Bash Reference Manual available at the GNU.org website.

Install a Basic FFMPEG version thru HomeBrew

After installing HomeBrew it’s time to install FFMPEG.

At the time of writing this article the latest version is 4.3.1.

Now type the following code:

brew install homebrew-ffmpeg/ffmpeg/ffmpeg

This might take 5 minutes or more.

After completion of the installation process, you might want to check if everything is fine with your installation of FFMPEG.

To do so, within your Terminal window, just type the following code:

ffmpeg -version

Your shell will display something like this:

If you see a similar screen after running the above command, it means that the installation of FFMPEG on MacOS X is complete!

Optional: Custom Installation of FFMPEG on MacOS X with Options

You might want to use an FFMPEG version that has all the protocols and libraries needed for a particular task.

For example, you might need to use a Decklink acquisition card from BlackMagic Design.

In such case you can check the options available on Brew by typing the following command:

brew info homebrew-ffmpeg/ffmpeg/ffmpeg

This will print out all the supported options.

In order to install FFMPEG with the desired options, you will need to type the command along with the desired option to install.

In the example of DeckLink support, you will have to install the Blackmagic Desktop Video and Driver libraries and then type the following command:

brew install homebrew-ffmpeg/ffmpeg/ffmpeg --with-decklink

As mentioned before, FFMPEG can be installed in many ways, with options that can process many formats and standards created in the last 30 years. Amongst these are patented formats, such as the common Mp3 format, a registered patent of the Fraunhofer Institute, which deals with licenses and authorized uses for commercial purposes.

For example: if you want to use a proprietary codec such as the AAC into an App or a commercial software, and embed FFMPEG as a component of your App, you might want to study this legal section of the FFMPEG website.

As with the Blackmagic Design option described above, if you need to use FFMPEG in a specific configuration or to output a special format, you may want to install a custom version of FFMPEG.

Let’s take an example: the AAC audio compression codec.

This audio codec can be used freely by calling the native AAC encoder with the option -c:a aac (which means “Codec for Audio: AAC”). But you might need a special version of the AAC codec, such as HE-AAC (High-Efficiency Advanced Audio Coding).

To use this patented format, which is yet another Fraunhofer Institute Patent, you will need to install FFMPEG with the libfdk_aac library.

To enable this option you will have to type:

brew install homebrew-ffmpeg/ffmpeg/ffmpeg --with-fdk-aac

From a quality standpoint the libfdk_aac option will produce superior results to the native FFMPEG AAC encoder. You might want to investigate this point further by reading the FFMPEG Guidelines for high-quality lossy audio encoding.

On MacOS X and with HomeBrew installed, you can check all the available install options of FFMPEG, by typing on your Terminal the following command:

brew options homebrew-ffmpeg/ffmpeg/ffmpeg

Installing FFMPEG 3.X on Linux (Ubuntu)

On Ubuntu releases you can open the Terminal Application which is already installed on the operating system.

To install FFMPEG, open Terminal and type the following pair of chained commands:

sudo apt update && sudo apt install ffmpeg

Please keep in mind that this Ubuntu version of FFMPEG might not be the same version installed with HomeBrew on a Mac OS system.

sudo = “Super User Do!”. This means you are asking the system to perform actions as the main administrator, with full privileges. You might be asked to enter an Administrator Password before proceeding.

apt = Advanced Package Tool. It is the most efficient and preferred way of managing software from the command line for Debian and Debian-based Linux distributions like Ubuntu.

update = This refreshes the package lists. This is the step that actually retrieves information about what packages can be installed on your Ubuntu system, including what updates to currently installed packages are available.

&& = this is used to chain commands together, such that the next command is run if and only if the preceding command exited without errors.

apt install ffmpeg = this is the command used to install the FFMPEG package, available via the Advanced Package Tool.

Installing FFMPEG 4.x on Ubuntu

Please note that this version of FFMPEG will also install the “non-free” options and licensed codecs, such as libfdk_aac. Please refer to the above Custom Installation section for legal terms and conditions that might apply in your specific development case.
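Assuming the Snap route implied by the PATH note further below (an assumption, since the original command is not spelled out here), the install would be:

```shell
# Install FFMPEG from the snap store (snap packages track newer
# FFMPEG releases than the default apt repositories).
sudo snap install ffmpeg
```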

Installing FFMPEG 4.x on Ubuntu with all the Bells and Whistles

FFMPEG can be installed in many ways. As discussed earlier you might use a Mac, and therefore install FFMPEG through HomeBrew, or you can follow the official instructions on how to compile FFMPEG with custom dependencies.


Note: you might encounter an error message using SNAP package, similar to this one:

The command could not be located because '/snap/bin' is not included in the PATH environment variable.

PATH is an environment variable on Unix-like operating systems, DOS, OS/2, and Microsoft Windows, specifying a set of directories where executable programs are located.

If this is the case, you can edit a file called /etc/environment, add /snap/bin to the list and then restart your system.

For example:

sudo nano /etc/environment

Then edit the file, by adding /snap/bin at the end of the list:

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"

Then you can restart your system:

sudo reboot
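The gist of that PATH edit can also be sketched in plain shell; the sample_path value below is a hypothetical stand-in, not your real PATH:

```shell
# Append /snap/bin to a PATH-style list only if it is not already present.
sample_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
case ":$sample_path:" in
  *:/snap/bin:*) ;;                             # already present: no change
  *) sample_path="$sample_path:/snap/bin" ;;    # append it
esac
echo "$sample_path"
```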

Installing FFMPEG 4.x on Ubuntu with HomeBrew

If you want to install FFMPEG with the Homebrew Package Manager, you can do so by typing:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

And then by typing:

brew install homebrew-ffmpeg/ffmpeg/ffmpeg

If you need to install additional codec or protocols, you can follow the same instructions as above for MacOS X’s HomeBrew.

Installing FFMPEG on Windows

As per the other platforms, on Windows machines you can install FFMPEG in several ways as described before. Particularly with Windows 10 you can install a component named “Windows Subsystem for Linux”.

This gives you the possibility to install a full Ubuntu release. To use the formulas described in this book I recommend you install Ubuntu 18.04 LTS or 20.04 LTS. For more information about installing BASH on Windows 10 please refer to this page:

https://docs.microsoft.com/en-us/windows/wsl/install-win10

Once you have installed the “Windows Subsystem for Linux” and a release of Ubuntu, you can then type the following line:

sudo apt-get install ffmpeg


Although FFMPEG can be installed on a very old machine, even with no graphics card installed, much faster performance can be achieved with newer machines and one or more graphics cards. If you have a configuration with one or more GPUs you can also enable a specific option called “Hardware Acceleration” on FFMPEG, and achieve an even faster experience for some operations.

However, for the sake of simplicity, these specific options and all their variants won’t be covered in this book.

If you are interested in discovering and enabling the “Hardware Acceleration” option for your FFMPEG installation, please take a look at the official FFMPEG documentation on hardware acceleration.


Approximately every 6 months the FFmpeg project makes a new major release. Between major releases point releases will appear that add important bug fixes but no new features.

Basic Definitions of FFMPEG

Welcome on the very first post of this Blog, which is dedicated to my first technical book “FFMPEG – From Zero to Hero“, available here.

In this post I will go through some basic terms and concepts of FFMPEG, whose name combines “FF” for “Fast Forward” and “MPEG” for the “Moving Picture Experts Group”. FFMPEG is a creation of Fabrice Bellard and one of the fastest video and image processors on earth. Used and trusted by giants such as YouTube, Netflix or Vimeo, FFMPEG has been developed and refined since December 2000, and continues to be refined, expanded and revised.

Brief History

Just a brief note on the word “MPEG”: The Moving Picture Experts Group, MPEG, is a working group of authorities that was formed in 1988 by the standards organization ISO, the International Organization for Standardization, and IEC, the International Electrotechnical Commission, to set standards for audio and video compression and transmission.

Since its establishment by Leonardo Chiariglione and Hiroshi Yasuda, the Moving Pictures Experts Group has made an indelible mark on the transition from analog to digital video.

What is FFMPEG?

FFMPEG is by definition a framework: a platform, or structure, for developing software applications. FFMPEG is able to process pretty much anything that humans have created in the last 30 years or so, in terms of audio, video, data and pictures.

It supports the most obscure old formats up to the cutting edge, no matter if they were designed by some standards committee, the community or a corporation.

It’s used through a “Command-Line Interface” (CLI): you won’t be able to use it with a common Graphical User Interface (GUI).

What FFMPEG can do for you

FFMPEG can do lots of things with video, images and audio in the fastest way possible.

It can convert, edit, rescale, extract, stream, post-process, equalize, colorize, segment, cut, copy, broadcast, squeeze, distort, and much much more.

If you ever wondered how the developers of YouTube or Vimeo cope with billions of video uploads or how Netflix processes its catalogue at scale or, again, if you want to discover how to create and develop your own video platform, FFMPEG is the right tool for you.

Basic Terminology

When you will use FFMPEG you will use a SHELL. What is a SHELL?

SHELL: Is a UNIX term for a user interface to the system: something that lets you communicate with the computer via the keyboard and the display thru direct instructions (Command Lines) rather than with a mouse and graphics and buttons. The shell’s job, then, is to translate the user’s command lines into operating system instructions. 

FFMPEG will work with BASH commands. What is BASH?

BASH: Bash is the shell, or command language interpreter, for Unix-like systems. The name is an acronym for the ‘Bourne-Again SHell’, a pun on Stephen Bourne, the author of the direct ancestor of the current Unix shell “sh“, which appeared in the 7th Edition Bell Labs Research version of Unix, in 1979.

FFMPEG uses several CODECS to work. What is a CODEC?

CODEC: A codec is the combination of two words: enCOder and DECoder. An encoder compresses a source file with a particular algorithm; a decoder can then decompress and reproduce the resulting file.

Common examples of video codecs are: MPEG-1, MPEG-2, H.264 (aka AVC), H.265 (aka HEVC), H.266 (aka VVC), VP8, VP9, AV1, or audio codecs such as Mp3, AAC, Opus, Ogg Vorbis, HE-AAC, Dolby Digital, FLAC, ALAC.

What does Encode mean?

ENCODE: The process of compressing a file so as to enable faster transmission of data.

What does it mean to Decode?

DECODE: The function of a program or a device that translates encoded data into its original format.

What is a file container?

CONTAINER: Like a box that contains important objects, containers exist to allow multiple data streams, such as video, audio, subtitles and other data, to be embedded into a single file. Amongst popular containers there are:
MP4 (.mp4), MKV (.mkv), WEBM (.webm), MOV (.mov), MXF (.mxf), ASF (.asf), MPEG Transport Stream (.ts), CAF (Core Audio Format, .caf), WEBP (.webp).
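Changing the box without touching its contents is a one-line job for FFMPEG; a sketch with hypothetical file names:

```shell
# Remux: move the streams from one container (MKV) into another (MP4)
# without re-encoding; -c copy stream-copies video, audio and data.
# "input.mkv" and "output.mp4" are hypothetical names.
ffmpeg -i input.mkv -c copy output.mp4
```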

What does bitrate mean?

BITRATE: Bitrate or data rate is the amount of data per second in the encoded video file, usually expressed in kilobits per second (kbps) or megabits per second (Mbps). 

The bitrate measurement is also applied to audio files.
An Mp3 file, for example, can reach a maximum bitrate of 320 kilobits per second, while a standard (non-compressed) CD audio track can have up to 1,411 kilobits per second.

A typical compressed h264 video in Full-HD has a bitrate in the range of 3,000–6,000 kbps, while a 4K video can reach a bitrate of up to 51,000 kbps. A lightly compressed production format, such as Apple ProRes in 4K resolution, can reach a bitrate of 253,900 kbps and higher.
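As a rough sanity check on these numbers, file size is simply bitrate multiplied by duration, divided by eight to go from bits to bytes. A minimal sketch (the helper name `size_mb` is ours, not an FFMPEG function):

```python
def size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Approximate file size in megabytes for a stream of the
    given bitrate (in kilobits per second) and duration."""
    kilobits = bitrate_kbps * seconds
    return kilobits / 8 / 1000  # kilobits -> kilobytes -> megabytes

# A 60-second MP3 at its 320 kbps maximum:
print(size_mb(320, 60))   # 2.4 (MB)
# One minute of Full-HD h264 at 6,000 kbps:
print(size_mb(6000, 60))  # 45.0 (MB)
```

This is why bitrate, more than resolution alone, determines how heavy a file is on disk and on the network.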

The Resolution is important!

RESOLUTION: Resolution defines the number of pixels (dots) that make up the picture on your screen.  

For any given screen size the more dots in the picture, the higher the resolution and the higher the overall quality of the picture.

TV resolution is often stated as the number of pixels or dots contained vertically in the picture. Each of these resolutions also has a number and a name associated with it.

For example: 480 is associated with SD (Standard Definition), 720 and 1080 are associated with HD (High-Definition), 2160 is associated with UHD (Ultra-High-Definition) and finally 4320 is associated with 8K UHD.
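The mapping between vertical pixel counts and these names can be sketched as a small lookup table (the names follow the associations listed above; the `label` helper is ours):

```python
# Vertical resolution -> common name, as listed in the text above.
RESOLUTION_NAMES = {
    480:  "SD",
    720:  "HD",
    1080: "HD",      # often marketed as "Full HD"
    2160: "UHD",     # often marketed as "4K"
    4320: "8K UHD",
}

def label(height: int) -> str:
    """Return the common name for a vertical resolution."""
    return RESOLUTION_NAMES.get(height, "unknown")

print(label(1080))  # HD
print(label(2160))  # UHD
```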

What does “Interlaced” mean?

INTERLACED FORMAT: A technique invented in the 1920s to display a full picture by dividing it into two sets of lines: the even lines and the odd lines.
The even lines are called the “even field”, while the odd lines are called the “odd field”.

The even lines are displayed on the screen, then the odd lines are displayed on the screen, each set every 1/60th of a second: together, the even and odd fields make up one video frame. This is one of the earliest video compression methods.
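The split-and-weave idea can be sketched in a few lines, treating a frame as a simple list of scan lines (an illustration of the concept, not of FFMPEG internals):

```python
# A frame as an ordered list of scan lines.
frame = ["line0", "line1", "line2", "line3", "line4", "line5"]

even_field = frame[0::2]  # lines 0, 2, 4 -> the "even field"
odd_field  = frame[1::2]  # lines 1, 3, 5 -> the "odd field"

# Weaving: interleave the two fields line by line to
# reconstruct the full progressive frame.
woven = [line for pair in zip(even_field, odd_field) for line in pair]
print(woven == frame)  # True
```

Deinterlacing filters such as FFMPEG’s `yadif` do a far more sophisticated job, since the two fields are captured 1/60th of a second apart and moving objects do not line up this neatly.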

What does “Progressive” or “p” mean?

PROGRESSIVE: Progressive format refers to video that displays both the even and odd lines, meaning the entire video frame, at the same time. The letter “p” next to the resolution of a video, as in 720p, 1080p or 2160p, stands for exactly this.

What is a Letterbox or Letterboxing?

LETTER BOX: Letterboxing is the practice of transferring film shot in a widescreen aspect ratio to standard-width video formats while preserving the content’s original aspect ratio. The resulting videographic image has black bars above and below it. LBX or LTBX are the identifying abbreviations for films and images so formatted.
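The height of those black bars follows from simple proportions: scale the source to the destination width, then split the leftover height between top and bottom. A minimal sketch (the `letterbox_bars` helper is ours; the 2048×858 source is a standard 2.39:1 “scope” frame):

```python
def letterbox_bars(src_w: int, src_h: int, dst_w: int, dst_h: int) -> int:
    """Height in pixels of each black bar when a wider source is
    scaled to match the destination width and letterboxed."""
    scaled_h = round(src_h * dst_w / src_w)  # height after scaling to dst width
    return (dst_h - scaled_h) // 2           # leftover split top/bottom

# 2.39:1 film (2048x858) letterboxed into a 1920x1080 (16:9) frame:
print(letterbox_bars(2048, 858, 1920, 1080))  # 138
```

FFMPEG’s `pad` filter is the usual tool for producing this layout in practice.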

What do VOD and SVOD mean?

VOD/SVOD: Acronyms for Video On Demand / Subscription-based Video On Demand.

What does OTT stand for?

OTT: Abbreviation for “Over-the-top”: a video streaming service offered directly to the public through an internet connection, rather than through an antenna, cable or satellite.

What is streaming?

STREAMING: The process of delivering media in small chunks of compressed data sent to a requesting device.
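The chunking idea can be sketched as a generator that hands out fixed-size segments of a byte stream, the way a streaming server serves a client (an illustration only; real protocols segment along frame and keyframe boundaries):

```python
def chunks(data: bytes, size: int):
    """Yield successive fixed-size chunks of a byte stream."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

media = b"0123456789" * 3            # stand-in for encoded media (30 bytes)
segments = list(chunks(media, 8))
print(len(segments))                 # 4 (three full chunks plus a remainder)
print(b"".join(segments) == media)   # True: reassembly is lossless
```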

What does RTMP stand for?

RTMP: Real-Time Messaging Protocol. A proprietary protocol originally developed by Macromedia, now part of Adobe.

What does HLS mean?

HLS: HTTP Live Streaming protocol. Created by Apple, originally for delivering video to Apple devices.

What does DASH mean in video streaming?

DASH: Dynamic Adaptive Streaming over HTTP. 

This protocol can be thought of as an open-standard counterpart to HLS.

What is a M3U8 file?

M3U8: A plain-text file encoded in UTF-8, organized as a playlist of items with their locations, to be played in a particular sequence.
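A minimal sketch of what such a playlist looks like and how simple it is to read: in an HLS-style M3U8 file, lines starting with `#` are tags or comments, and every other non-empty line is the location of an item (the tag values and segment names below are illustrative):

```python
# A minimal, hypothetical HLS-style media playlist.
playlist = """\
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXT-X-ENDLIST
"""

# Non-empty lines that are not tags are the items to play, in order.
items = [line for line in playlist.splitlines()
         if line and not line.startswith("#")]
print(items)  # ['segment0.ts', 'segment1.ts']
```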

What’s AV1?

AV1: Stands for AOMedia Video 1, which is an acronym for Alliance for Open Media Video 1, a codec developed jointly by Amazon, Cisco, Google, Intel, Microsoft, Mozilla, Netflix, ARM, NVIDIA, Apple and AMD.

What does Batch Processing mean?

BATCH PROCESSING: The act of processing a group of catalogued files.

What does HDR stand for?

HDR: Acronym for High-Dynamic-Range. It is about recreating image realism from camera through postproduction to distribution and display.

What does h264 mean in video?

h264: A video compression standard defined by the Moving Picture Experts Group (MPEG) and the International Telecommunication Union (ITU). It is also referred to as h264/AVC (Advanced Video Coding).

What does x264 mean?

x264: A free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format.