Ffmpeg


ffmpeg is a set of video processing tools used by ZoneMinder to generate video files from network camera streams.

What is FFMPEG

One thing to know about ffmpeg is that it is versatile in what inputs and outputs it can use.


For example, you can input:

  • from the desktop screen: x11grab
  • from the framebuffer itself: fbdev and /dev/fb0
  • from a network video stream: http://ongoingstream.mjpeg or rtsp://
  • from just a UDP socket: udp://ipaddress:port
  • from a file on the internet: http://justafile.mp4
  • from a video on your local machine: /directory/file
  • from a pipe: rgbledoutput | ffmpeg -i - ... (note the shell pipe |; using > would just redirect to a file named ffmpeg)


and you can also output to most, if not all, of these locations.
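As a sketch of the output side (assuming a build with the fbdev output device; the file path, pixel format, and framebuffer device below are placeholders), you could play a local file straight to the framebuffer:

ffmpeg -re -i /directory/file -pix_fmt rgb565le -f fbdev /dev/fb0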

Where is ffmpeg in ZoneMinder?

  • If you examine the source code, you will see zm_ffmpeg.cpp, which uses libavcodec, a library providing the functions of the ffmpeg binary in a form you can wrap into a program (such as ZM). This is how ZM records when using only the ffmpeg method.

The binary itself was previously used, but I'm not sure whether it still is in 1.36+. However, the binary remains useful for testing streams.


Obtaining FFMPEG

You should first check your distribution's package manager. Failing that, you have the option of compiling from source or downloading a binary, both of which are linked from the main ffmpeg website.
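For example, on common distributions the package-manager route looks something like this (package names can vary by distro and release, and some distros keep ffmpeg in an add-on repository):

$ sudo apt install ffmpeg    # Debian/Ubuntu
$ sudo dnf install ffmpeg    # Fedora and derivatives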

Using FFMPEG

Testing a Stream Path with FFMPEG

e.g.

$ ffmpeg -i rtsp://admin:password@192.168.1.64:554/video/1 output.mp4

If ffmpeg connects successfully, it will print the stream's encoding and resolution. ffplay can also be used (if you are running a GUI such as X) and is easier in this case. But if you are testing from a headless machine, use ffmpeg and output to a file.

$ ffplay rtsp://admin:password@192.168.1.64:554/video/1
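ffprobe, which ships alongside ffmpeg, is another way to inspect a stream without writing any output; for example (the credentials and path are the same placeholders as above):

$ ffprobe -show_streams rtsp://admin:password@192.168.1.64:554/video/1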

A note on the RPI

The RPI has its own build of FFMPEG that includes support for the omx and mmal hardware peripherals. It is recommended to obtain it from the official RPI repos. Note that this provides hardware support for exporting, but not necessarily for recording video (see the paragraphs above).
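As a sketch, assuming your RPI build includes the hardware encoder (check with ffmpeg -encoders | grep -E 'omx|mmal'), an export using it could look like:

ffmpeg -i input.mp4 -c:v h264_omx output.mp4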

FFMPEG Video Export Options

Ffmpeg is used in exporting events to downloadable video files. Exporting video is done using the zmvideo.pl script.

You can control the options that get passed to ffmpeg during the export process using two config options found in the Images tab of the Options dialog.

FFMPEG_INPUT_OPTIONS

Usually leave this empty.

FFMPEG_OUTPUT_OPTIONS

In 1.36 these are generally not used, but for historical purposes, here are some possible settings:

To obtain a good-quality x264-based mp4 export, the following example works:

-r 30 -vcodec libx264 -threads 2 -b:v 2000k -minrate 800k -maxrate 5000k -bufsize 5000k

If you want h264 as fast as possible (with some sacrifice in quality), you can try:

-c:v libx264 -preset ultrafast
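On recent ffmpeg builds, a constant-quality export is often simpler than juggling fixed bitrates. A possible alternative (not a documented ZM setting, just standard libx264 options):

-c:v libx264 -preset veryfast -crf 23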

Examples

Output video to UDP socket

ffmpeg -i myvideo.mp4 -f h264 udp://127.0.0.1:12345

The -f is required and specifies the output format for the UDP stream. This could easily be used with, say, /dev/video0 (a webcam) to restream from a small SBC (although you would probably have better luck with mjpg-streamer, as this solution might not handle disconnects).
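For instance, a rough sketch of the webcam restream just mentioned (the device path and destination address are placeholders):

ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -f h264 udp://192.168.1.10:12345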

Download Only Part of a Video

ffmpeg -t 5 -i input video_output_first_5_seconds.mp4
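If the input codec is acceptable as-is, adding stream copy avoids re-encoding, which is much faster for this kind of trim (a sketch, reusing the placeholder URL from earlier in this page):

ffmpeg -t 5 -i http://justafile.mp4 -c copy video_output_first_5_seconds.mp4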

Joining Jpegs

ffmpeg -framerate 5 -i %05d-capture.jpg output.mp4

Use ffmpeg to concatenate jpeg images stored by ZoneMinder into an mp4. Note that %05d-capture.jpg is a printf-style pattern: % introduces the pattern, 0 means zero-padded, 5 is the field width, and d matches a decimal number; the rest is a string common to all the jpg files. Edit the framerate as needed. This is the format used by ZoneMinder to store jpegs.

(Reference: [1])
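If your jpegs are not numbered contiguously (for example, after deleting frames), the glob pattern matcher is a possible workaround, assuming your ffmpeg was built with glob support:

ffmpeg -framerate 5 -pattern_type glob -i '*-capture.jpg' output.mp4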

Combining Multiple Videos

Use ffmpeg to concatenate a number of audio / video files.

First, put all the desired files into a list:

for f in ./*.mp4; do echo "file '$f'" >> mylist.txt; done

Then combine the files using the concat demuxer:

ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4

Note that special characters and spaces can be troublesome.
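As a sketch of one such pitfall: the list entries use single quotes, and (following ffmpeg's usual quoting rules) a single quote inside a filename has to be written by closing the quote, backslash-escaping it, and reopening, e.g. a line in mylist.txt for my'video.mp4 would be:

file 'my'\''video.mp4'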

(Reference: [2])

Extract Portion of Video/Audio

ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3

Use the -ss option to specify the starting timestamp and the -t option to specify the encoding duration, e.g. from 3 minutes and 5 seconds in, for 45 seconds. The timestamps need to be in HH:MM:SS.xxx format or in seconds. If you don't specify the -t option, it will go to the end.

Ref:https://stackoverflow.com/questions/9913032/how-can-i-extract-audio-from-video-with-ffmpeg

Note: This doesn't always work exactly as you expect. In my experience the cut points are not always frame-accurate (seeking can snap to nearby keyframes), so don't be surprised if your extract starts or ends earlier or later than you intended.
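If speed matters more than frame accuracy, placing -ss before -i makes ffmpeg seek in the input (fast, but keyframe-aligned), and stream copy avoids re-encoding entirely. A possible variant of the command above:

ffmpeg -ss 00:03:05 -i sample.avi -t 45 -c copy sample_clip.avi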

Get Image from Network Stream and Output to Remote Framebuffer

#!/bin/bash
# grab one frame from the network camera
ffmpeg -i http://user:password@ipaddress/videostream -frames:v 1 -y snapshot.jpg
# convert it to raw rgb565 at the target framebuffer's resolution
ffmpeg -i snapshot.jpg -s 320x240 -f rawvideo -pix_fmt rgb565 -vcodec rawvideo -r 1 -y output.raw
# scp can't write to a device file:
#scp output.raw root@ipaddress:/dev/fb0
# trick: dd if=/dev/mtd0 | ssh me@myhost "dd of=mtd0.img"
# https://unix.stackexchange.com/questions/189722/can-you-scp-a-device-file
dd if=./output.raw | ssh root@ipaddress "dd of=/dev/fb0"

This is an example of taking an image from a network IP camera and outputting it to the framebuffer of a monitor/LCD. The above is not optimized and is meant as a demonstration. It might be used, e.g., on an RPI with a TFT LCD attached. Note that the pixel format and resolution above are for a 16-bits-per-pixel TFT at 320x240; your framebuffer will likely differ. You can find your parameters by taking a snapshot from the framebuffer with ffmpeg and looking at the output.
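As the last sentence suggests, you can discover the framebuffer's resolution and pixel format by grabbing a frame from it; a sketch (run this on the machine that owns /dev/fb0, and read the parameters from ffmpeg's log output):

ffmpeg -f fbdev -i /dev/fb0 -frames:v 1 -y fb_snapshot.png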

Overlay second video on a video stream

 ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream"  
 -i myvideo.mp4 -filter_complex overlay -f h264 udp://127.0.0.1:12345

 ffplay udp://localhost:12345

 ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -f image2
 -stream_loop -1 -i overlay.png -filter_complex overlay -f h264 udp://127.0.0.1:12345

Here's an example of overlaying a second video on a live stream from an IP camera; the ffplay command views the resulting stream. Here myvideo.mp4 is smaller in resolution than the IP camera stream. You can of course also overlay images, including transparent images that update; the third example shows the proper syntax for that.
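The overlay filter also accepts x:y coordinates if you don't want the second input pinned to the top-left corner; a sketch placing it 10 pixels in from each edge (same placeholder addresses as above):

 ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -i myvideo.mp4
 -filter_complex "overlay=10:10" -f h264 udp://127.0.0.1:12345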

Debugging Media streams

ffmpeg -loglevel [info,debug,etc] -i input output.mp4 
ffmpeg -debug mmco -i rtsp://user:password@ipaddress:554/streampath output.mp4

The first is the standard method to debug. The second enables debugging for specific parts of ffmpeg. These may be useful if you are trying to determine why a camera is failing to connect properly through ffmpeg (reference book: FFMPEG Basics). Note that the -debug flag accepts a number of parameters other than mmco (which is only valid for h264); a few other possible values are buffers, pict, bitstream, and rc.
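When diagnosing a connection, it can help to avoid writing a file at all; the null muxer discards the output while still exercising the full connect-and-decode path. A sketch, with the same placeholder stream as above:

ffmpeg -loglevel debug -t 5 -i rtsp://user:password@ipaddress:554/streampath -f null -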

See Also

  • Zmodopipe - Some examples of ffmpeg reading from a pipe, outputting to a JPEG file, and also ffserver.
  • FFMPEG Basics by Frantisek Korbel. This book is based around command line usage, and does not necessarily go into detail on the source code.