ffmpeg -i "$in"

==webm to mp4

Example:

<code bash>
ffmpeg -i "$in" -tune film -crf 20 -preset slow -b:a 192k -movflags +faststart "${in%%.webm}.mp4"
</code>
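The video codec above is left to ffmpeg's defaults, which for an .mp4 output normally means libx264 (and AAC audio). A sketch with the encoders named explicitly (assuming libx264 and the native aac encoder are available):

<code bash>
# same conversion, but with video and audio encoders spelled out
ffmpeg -i "$in" -c:v libx264 -tune film -crf 20 -preset slow -c:a aac -b:a 192k -movflags +faststart "${in%%.webm}.mp4"
</code>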
==h264
-maxrate 6M -bufsize 6M
</code>

Or set an average bitrate, without using ''-crf'' (see [[https://trac.ffmpeg.org/wiki/Limiting%20the%20output%20bitrate|Limiting the output bitrate]]):

<code bash>
-b:v 4M -maxrate 6M -bufsize 10M
</code>

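These are only the rate-control options; a complete command could look like this sketch (libx264 and the ''$in''/''$out'' variables are assumptions):

<code bash>
# sketch: H.264 encode at ~4 Mbit/s average, 6 Mbit/s peak, audio copied unchanged
ffmpeg -i "$in" -c:v libx264 -b:v 4M -maxrate 6M -bufsize 10M -c:a copy "$out"
</code>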
==prores
  * Prores proxy: -profile:v 0
-vf "scale=iw/2:-2"   # same (half width and height), but height divisible by 2
etc.
</code>

==Deinterlace

See https://ffmpeg.org/ffmpeg-filters.html#yadif :

> yadif=mode:parity:deint
> defaults: 0 (send_frame = one frame for each frame) : -1 (auto = auto detection of field order) : 0 (all = all frames)

<code bash>
-vf "yadif=0:-1:0"
</code>
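A complete deinterlacing command could look like this sketch (the libx264/CRF settings and the output name are assumptions, not part of the filter):

<code bash>
# sketch: deinterlace with yadif, re-encode the video, copy the audio
ffmpeg -i "$in" -vf "yadif=0:-1:0" -c:v libx264 -crf 18 -c:a copy "${in%.*}-deint.mp4"
</code>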
ffmpeg -i input.flv -vf "select='eq(pict_type,PICT_TYPE_I)'" -vsync vfr thumb%04d.png

Find all .mov files under some directory, and extract a single JPEG from each:

<code bash>
pos=2               # get the frame at $pos seconds from the start of the file
search_in=/mnt/x/y
ext="mov"

find "$search_in" -type f -name "*.$ext" -not -name ".*" \
  | while IFS= read -r f; do
      n=$(basename "$f")
      [ -f "$n.jpg" ] && n="$n-$RANDOM"   # avoid overwriting an existing output
      ffmpeg -nostdin -hide_banner -loglevel error -ss $pos -i "$f" \
             -r 1 -vframes 1 -f image2 "$n.jpg" \
        && echo "OK $n.jpg"
    done
</code>
See also ffmpeg's [[https://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video|Create a thumbnail image every X seconds of the video]].

==Images to video

With numbered frames, the numbering must start at 0: "frame0000.png", "frame0001.png", "frame0002.png", ...

===Animated webp
Frames to animated .webp:
</code>

===Jpegs to mp4
1 .jpg image = 1 frame:
<code bash>
in="image%03d.jpg"   # image000.jpg to image999.jpg
out="image-to-video.mp4"
ffmpeg -framerate 25 -i "$in" -c:v libx264 -crf 18 -pix_fmt yuv420p "$out"
</code>
To start on a specific numbered frame, use ''-start_number''. Example starting at "image150.jpg":

<code bash>
in="image%03d.jpg"   # image150.jpg to image999.jpg
out="image-to-video.mp4"
ffmpeg -start_number 150 -framerate 25 -i "$in" -c:v libx264 -crf 18 -pix_fmt yuv420p "$out"
</code>
Slideshow with 1 .jpg = 1 second:

<code bash>
fps=25
out="slideshow-glob.mp4"
ffmpeg -framerate 1 -pattern_type glob -i '*.jpeg' -c:v libx264 -crf 20 -g $fps -pix_fmt yuv420p -r $fps "$out"
</code>
If the size of the frames is uneven (e.g. 512x459), it needs to be set to an even size, or there will be an error like:

> ''height not divisible by 2 (512x459)''
> ''Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height''

In this case, specify an even size. For example: ''-s 512x460''
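An alternative to hard-coding ''-s'' is a scale or pad filter that rounds the dimensions to even values, so the same command works for any input size (a sketch; choose either rounding direction):

<code bash>
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2"   # round width/height down to even values
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"       # or pad up to even values instead
</code>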

==Audio
===Audio AAC
<code bash>
-c:a aac -b:a 192k
</code>
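For example, to re-encode only the audio to AAC while copying the video stream (a sketch; ''$in'' and ''$out'' are placeholders):

<code bash>
ffmpeg -i "$in" -c:v copy -c:a aac -b:a 192k "$out"
</code>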

===AC3 5.1 to Stereo

Using defaults:

  ffmpeg -i input.ac3 -ac 2 output.ac3

Or manual:

  ... -af "pan=stereo|FL < 1.0*FL + 0.707*FC + 0.707*BL|FR < 1.0*FR + 0.707*FC + 0.707*BR" ...

See also: https://forum.videohelp.com/threads/404278-Ac3-5-1-audio-track-convert-to-stereo
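Put together, a full downmix could look like this sketch (encoding the downmixed audio to AAC is an assumption; any audio encoder works, but ''-c:a copy'' cannot be combined with ''-af''):

<code bash>
# sketch: keep the video as-is, downmix 5.1 to stereo, encode the result to AAC
ffmpeg -i "$in" -c:v copy \
       -af "pan=stereo|FL < 1.0*FL + 0.707*FC + 0.707*BL|FR < 1.0*FR + 0.707*FC + 0.707*BR" \
       -c:a aac -b:a 192k "$out"
</code>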

===Remap 2 mono audio to stereo
Since ''-map'' is used here, the video must also be mapped explicitly (''-map 0:v''), or the output has no video.

<code bash>-filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" -map 0:v -map "[st]"
</code>
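As a complete command it could be used like this sketch (copying the video and encoding the merged stereo track to AAC are assumptions):

<code bash>
ffmpeg -i "$in" \
       -filter_complex "[0:a:0][0:a:1]amerge,channelmap=channel_layout=stereo[st]" \
       -map 0:v -map "[st]" -c:v copy -c:a aac -b:a 192k "$out"
</code>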

=== FLAC

==== FLAC to MP3
<code bash>
The metadata from FLAC is preserved by default and written to an ID3v2.4 header. To write an ID3v2.3 header instead, add the ''-id3v2_version 3'' option.
==== WAV to FLAC
Can be done with ''flac'' instead of ffmpeg:
</code>

=== Detect silence
==== volumedetect
This can be slow: 330 min. for a 167 min. / 600 GB file with 16 audio channels (though it was not slow on a 30 GB, 16-channel QuickTime file). Note that it does not report the channels separately!

  time ffmpeg -i "$in" -map 0:a -af "volumedetect" -f null - 2>&1 | tee "$in-volumedetect.txt"
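The ''astats'' filter below does report per-channel values; if volumedetect itself is wanted for a single channel, a pan filter can select that channel first (a sketch; channel index 0 is just an example):

<code bash>
# measure only channel 0 (c0); repeat with c1, c2, ... for the other channels
ffmpeg -i "$in" -map 0:a -af "pan=mono|c0=c0,volumedetect" -f null -
</code>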
==== astats
  ffmpeg -i "$in" -vn -sn -dn -map 0:a -af "astats=measure_overall=none" -f null - 2>&1
See also https://ffmpeg.org/ffmpeg-filters.html#toc-ebur128-1

==== EBU stats
For a file with 12 mono audio tracks:

==Delay track
To add a delay to the start of an input, use ''-itsoffset offset'' before the input file.

See [[https://superuser.com/a/983153/53547|this answer]] to [[https://superuser.com/questions/982342/|In ffmpeg, how to delay only the audio of a .mp4 video without converting the audio?]]

For example, to merge an audio and a video file, but with the audio starting 440 ms later (11 frames at 25 fps):

  ffmpeg -i "$video_file" -itsoffset 0.440 -i "$audio_file" -map 0:v -map 1:a -c copy "ff-resync-video_audio.mp4"

===Syntax for time duration
''[-][HH:]MM:SS[.m...]'' or ''[-]S+[.m...][s|ms|us]''
See https://ffmpeg.org/ffmpeg-utils.html#time-duration-syntax
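For example, these all express the same 440 ms offset (''-itsoffset'' is just the option used above):

<code bash>
-itsoffset 0.440
-itsoffset 440ms
-itsoffset 00:00:00.440
</code>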

==faststart
Move the "moov" atom to the start of the file (https://ffmpeg.org/ffmpeg-formats.html#Options-8). (This implies a second pass that copies the file after encoding.)

<code>-movflags faststart
</code>
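Since no re-encoding is involved, this can also be applied to an existing file by remuxing it (a sketch; the output name is arbitrary):

<code bash>
# copy all streams, only moving the moov atom to the front
ffmpeg -i "$in" -map 0 -c copy -movflags +faststart "faststart-$in"
</code>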

==Change framerate

''-r 25'' or ''-vf "fps=fps=25"''

Example to check what it does:

  ffmpeg -f lavfi -i testsrc=duration=10:size=854x480:rate=60 \
    -vf "drawtext=text=%{n}:fontsize=72:r=60:x=(w-tw)/2:y=h-(2*lh):fontcolor=white:box=1:boxcolor=0x00000099" test.mp4

== Timecode
=== Change existing timecode

  f=OriginalFile.mxf
  newtc="00:00:00:00"
  out=NewTC_File.mxf
  ffmpeg -i "$f" -map 0:v -map 0:a -codec copy -movflags use_metadata_tags -timecode "$newtc" "$out"

=== Add pseudo-timecode
Example with Fuji .MOV files, to add the "Create Date" time as a timecode:
done
</code>

If the exiftool command gives the error "End of processing at large atom (LargeFileSupport not enabled)", add the ''-api largefilesupport=1'' option, so that the exiftool line becomes:

<code bash>t=$(exiftool -api largefilesupport=1 -CreateDate "$f" | awk '{print $NF}');</code>

For .wav files from a Zoom recorder, use ''-DateTimeOriginal'' instead of ''-CreateDate'':
<code bash>t=$(exiftool -DateTimeOriginal "$f" | awk '{print $NF}'); tc="$t:00";</code>

==Diff
$ ffmpeg -i "$filename" -map 0:a -codec copy -hide_banner -loglevel warning -f md5 -
$ ffmpeg -i "$filename" -map 0 -c copy -f streamhash -hash md5 -
...
0,v,MD5=50224fec84bc6dfde90d742bcf1d2e01
1,a,MD5=1d2a32ed72798d66e0110bd02df2be65

$ ffmpeg -i "$f" -map 0 -c copy -f streamhash -hash md5 "$f.stream.md5"

See also:
# or
-hide_banner -loglevel info -stats</code>

===Edit metadata===

See [[https://superuser.com/questions/834244/how-do-i-name-an-audio-track-with-ffmpeg/835069|How do I name an audio track with ffmpeg]]:

  ffmpeg -i input.mp4 -map 0 -c copy -metadata:s:a:0 title="One" -metadata:s:a:1 title="Two" -metadata:s:a:0 language=eng -metadata:s:a:1 language=spa output.mp4

==Examples