Ffmpeg
Revision as of 19:52, 25 March 2023
ffmpeg is a set of video processing tools used by ZoneMinder to generate video files from the network camera streams.
What is FFMPEG
One thing to know about ffmpeg is that it is versatile in the inputs and outputs it can use. For example, you can input:
- from the desktop screen (x11grab)
- from the framebuffer itself (fbdev and /dev/fb0)
- from a network video stream (http://ongoingstream.mjpeg or rtsp://)
- from a UDP socket (udp://ipaddress:port)
- from a file on the internet (http://justafile.mp4)
- from a video on your local machine (/directory/file)
- from a pipe (rgbledoutput | ffmpeg)
and you can also output to most, if not all, of these locations.
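As a sketch of that versatility, here is a hedged example of one of the less obvious inputs, x11grab (desktop capture). The display :0.0 and the capture size are assumptions; adjust them for your system. The command is built into a variable and printed, since it needs a running X display to actually execute:

```shell
# Capture 10 seconds of the desktop via x11grab. Printed rather than
# executed here; run the echoed command yourself on a machine with X.
display=":0.0"    # assumption: your X display
size="1280x720"   # assumption: area to capture
grab_cmd="ffmpeg -f x11grab -video_size $size -framerate 25 -i $display -t 10 desktop.mp4"
echo "$grab_cmd"
```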
Where is ffmpeg in Zoneminder?
- If you examine the source code, you will see zm_ffmpeg.cpp, which uses libavcodec, a library exposing the functions of the ffmpeg binary so they can be wrapped into a program (such as ZM). This is how ZM records when using only the ffmpeg method.
The binary was previously used, but I'm not sure if it's still used in 1.36+. However, the binary does have a use for testing streams.
Obtaining FFMPEG
You should first check your distribution's package manager. Aside from that you have the option of compiling from source, or downloading a binary, which are linked from the main ffmpeg website.
Using FFMPEG
Testing a Stream Path with FFMPEG
e.g.
$ ffmpeg -i rtsp://admin:password@192.168.1.64:554/video/1 output.mp4
If ffmpeg connects successfully, it will output the encoding of the stream and the resolution. ffplay can also be used (if you are running a GUI such as X) and is easier in this case; but if you are testing from a headless machine, use ffmpeg and output to a file.
$ ffplay rtsp://admin:password@192.168.1.64:554/video/1
A note on the RPI
The RPI has its own build of FFMPEG which includes support for the omx and mmal hardware peripherals. It is recommended to obtain it from the official RPI repos. Note that this provides hardware support for exporting, but not necessarily for recording videos (see above paragraphs).
FFMPEG Video Export Options
Ffmpeg is used in exporting events to downloadable video files. Exporting video is done using the zmvideo.pl script.
You can control the options that get passed to ffmpeg during the export process using 2 config options found in the Images tab of the options dialog.
FFMPEG_INPUT_OPTIONS
usually leave this empty
FFMPEG_OUTPUT_OPTIONS
In 1.36 these are generally not used, but for historical purposes, here are some possible settings:
To obtain a good-quality x264-based mp4 export, the following example works:
-r 30 -vcodec libx264 -threads 2 -b 2000k -minrate 800k -maxrate 5000k
If you want as fast as possible h264(with some sacrifice in quality) you can try
-c:v libx264 -preset ultrafast
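For context, here is a hedged sketch of how those output options would slot into a complete ffmpeg invocation run by hand (event.mp4 and export.mp4 are placeholder filenames, not something ZM produces):

```shell
# Historical-style export: x264 with a capped bitrate. Printed rather
# than executed, since it needs a real input file.
in_file="event.mp4"   # placeholder input
out_opts="-r 30 -vcodec libx264 -threads 2 -b 2000k -minrate 800k -maxrate 5000k"
export_cmd="ffmpeg -i $in_file $out_opts export.mp4"
echo "$export_cmd"
```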
Examples
Output video to UDP socket
ffmpeg -i myvideo.mp4 -f h264 udp://127.0.0.1:12345
The -f is required and specifies the output format for the UDP stream. This could easily be used with, say, /dev/video0 (a webcam) to restream from a small SBC (although you would probably have better luck with mjpeg-streamer, as this solution might not handle disconnects).
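As a hedged sketch of that webcam idea (the v4l2 device /dev/video0 and the receiver address 192.168.1.10:12345 are made-up values, and the -preset/-tune flags are one common low-latency choice, not from the original command):

```shell
# Restream a webcam over UDP as raw h264. Device and destination are
# assumptions; the command is echoed so it can be checked first.
cam="/dev/video0"
dest="udp://192.168.1.10:12345"
stream_cmd="ffmpeg -f v4l2 -i $cam -c:v libx264 -preset ultrafast -tune zerolatency -f h264 $dest"
echo "$stream_cmd"
```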
Download Only Part of a Video
ffmpeg -t 5 -i input video_output_first_5_seconds.mp4
Single Screenshot of a Video
ffmpeg -ss 87.52 -i /mnt/zm1/8/2023-01-26/5643528/5643528-video.mp4 -frames:v 1 /mnt/zm1/8/2023-01-26/5643528/01145-capture.jpg
(from forum, this is what ZM uses to make thumbnails for the timeline)
Joining Jpegs
ffmpeg -framerate 5 -i %05d-capture.jpg output.mp4
Use ffmpeg to concatenate jpeg images stored by ZoneMinder into an mp4. Note that %05d-capture.jpg is a printf-style pattern: %05d matches a zero-padded five-digit number (00001, 00002, ...), and the rest is the string common to all the jpg files. Edit the framerate as needed. This is the format ZoneMinder uses to store jpegs.
(Reference: [1])
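You can preview which filenames the %05d pattern expands to with printf, which uses the same format syntax:

```shell
# printf shares ffmpeg's %05d format: a zero-padded five-digit number,
# repeated once per argument.
printf '%05d-capture.jpg\n' 1 2 123
# prints:
# 00001-capture.jpg
# 00002-capture.jpg
# 00123-capture.jpg
```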
While the above would be used for ZoneMinder, a non-ZM solution might use ffmpeg's glob feature. (Note that you must pass -pattern_type glob; you can't simply use an asterisk on its own.)
ffmpeg -r 1 -pattern_type glob -i 'test_*.jpg' -c:v libx264 out.mp4
https://superuser.com/questions/624567/how-to-create-a-video-from-images-using-ffmpeg
Combining Multiple Videos
Use ffmpeg to concatenate a number of audio / video files.
first put all desired files into a list
for f in ./*.mp4; do echo "file '$f'" >> mylist.txt; done
combine the files using the concat demuxer
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4
Note that special characters and spaces can be troublesome.
(Reference: [2])
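The list-building loop can be tried safely on its own; this sketch uses empty placeholder files in a temp directory just to show what mylist.txt ends up containing (no ffmpeg run):

```shell
# Demonstrate the concat list format with dummy files.
dir=$(mktemp -d)
cd "$dir"
touch a.mp4 b.mp4
for f in ./*.mp4; do echo "file '$f'" >> mylist.txt; done
cat mylist.txt
# prints:
# file './a.mp4'
# file './b.mp4'
```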
Extract Portion of Video/Audio
ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3
Use the -ss option to specify the starting timestamp and the -t option to specify the encoding duration, e.g. from 3 minutes and 5 seconds in, for 45 seconds. The timestamps need to be in HH:MM:SS.xxx format or in seconds. If you don't specify the -t option, it will go to the end.
Ref:https://stackoverflow.com/questions/9913032/how-can-i-extract-audio-from-video-with-ffmpeg
Note: This doesn't always work as you expect. In my experience ffmpeg jumps around near the ends of the cut (seeking is keyframe-based), so don't be surprised if your extract starts earlier or later than you intended.
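A related variant, as a hedged sketch: adding -c copy cuts without re-encoding, which is much faster but snaps to keyframes, i.e. exactly the imprecision described in the note above. Printed rather than executed, since it needs the real sample.avi:

```shell
# Fast, keyframe-aligned cut with no re-encode (start may drift to the
# nearest keyframe). clip.mp4 is a placeholder output name.
cut_cmd="ffmpeg -ss 00:03:05 -i sample.avi -t 45 -c copy clip.mp4"
echo "$cut_cmd"
```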
Convert Portion of Video to GIF
ffmpeg -i video.mp4 -ss 00:03:05 -t 00:00:05.0 output.gif
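For better-looking GIFs, a commonly used refinement (a sketch, not from the original page) generates a palette first with the palettegen filter and then applies it with paletteuse. Printed only, since it needs a real video.mp4:

```shell
# Two-pass GIF: build a palette, then apply it on a second pass.
gif_pass1="ffmpeg -ss 00:03:05 -t 5 -i video.mp4 -vf palettegen palette.png"
gif_pass2="ffmpeg -ss 00:03:05 -t 5 -i video.mp4 -i palette.png -lavfi paletteuse output.gif"
echo "$gif_pass1"
echo "$gif_pass2"
```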
Get Image from Network Stream and Output to Remote Framebuffer
#!/bin/bash
ffmpeg -i http://user:password@ipaddress/videostream -frames:v 1 -y snapshot.jpg
ffmpeg -i snapshot.jpg -s 320x240 -f rawvideo -pix_fmt rgb565 -vcodec rawvideo -r 1 -y output.raw
#scp doesn't work
#scp output.raw root@ipaddress:/dev/fb0
#trick:
# dd if=/dev/mtd0 | ssh me@myhost "dd of=mtd0.img"
#https://unix.stackexchange.com/questions/189722/can-you-scp-a-device-file
dd if=./output.raw | ssh root@ipaddress "dd of=/dev/fb0"
This is an example of taking an image from a network IP camera and outputting it to the framebuffer of a monitor/LCD. The above is not optimized and is meant as a demonstration. It might be used, e.g., on an RPI with a TFT LCD attached. Note that the pixel format and resolution above are for a 16-bits-per-pixel TFT at 320x240; your framebuffer will likely be different. You can find your parameters by taking a snapshot from the framebuffer with ffmpeg and looking at the output.
Overlay second video on a video stream
ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -i myvideo.mp4 -filter_complex overlay -f h264 udp://127.0.0.1:12345
ffplay udp://localhost:12345
ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -f image2 -stream_loop -1 -i overlay.png -filter_complex overlay -f h264 udp://127.0.0.1:12345
Here's an example of overlaying a second video on a live stream from an IP camera; the ffplay command views the resulting stream. Here myvideo.mp4 is smaller in resolution than the IP camera stream. You can of course also overlay images, including transparent images that update; the third command is the proper syntax for this.
Debugging Media streams
ffmpeg -loglevel [info,debug,etc] -i input output.mp4
ffmpeg -debug mmco -i rtsp://user:password@ipaddress:554/streampath output.mp4
The first is the standard method to debug. The second is for certain parts of ffmpeg. These may be useful, if you are trying to determine why a camera is failing to connect properly to ffmpeg. (reference book: FFMPEG Basics). Note that the -debug flag has a number of parameters other than mmco (which is only valid for h264) that can be passed. A few other possible values are buffers, pict, bitstream, rc.
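ffprobe (shipped alongside ffmpeg) is also handy here: it reports a stream's codecs and resolution without transcoding anything. A hedged sketch, reusing the same made-up camera URL as above and printed rather than run, since it needs a live camera:

```shell
# Inspect a stream's codec/resolution details with ffprobe.
url="rtsp://user:password@ipaddress:554/streampath"
probe_cmd="ffprobe -loglevel info -show_streams $url"
echo "$probe_cmd"
```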
Framebuffer Notes
Displaying to the framebuffer under Linux, see also: using a SPI-connected LCD, for example https://elinux.org/MiniDisplay_Cape and https://github.com/jeidon/cfa_bmp_loader/blob/master/sample-code/main.c. Essentially: init code for the hardware, then write via SPI. You can possibly have multiple SPI screens (if you have multiple SPI buses).
see general notes on framebuffer writing here: https://elinux.org/RPi_Framebuffer
This is interesting: you can get a USB-to-LCD (16 character) display and just write to it as a serial port. http://web.archive.org/web/20211207191549/https://hamvoip.org/hamradio/USBLCD/
There may be higher-resolution screens, but more research is needed. This one looks limited, although easier to use: https://www.cnx-software.com/2022/04/29/turing-smart-screen-a-low-cost-3-5-inch-usb-type-c-information-display/ Maybe also consider an Uno with a TFT shield.
More general notes on writing to the fb: https://web.archive.org/web/20210512060006/https://avikdas.com/2019/01/23/writing-gui-applications-on-raspberry-pi-without-x.html
You can also, of course, read from the framebuffer, not just write:
sudo ffmpeg -f fbdev -framerate 1 -i /dev/fb0 -frames:v 1 screenAA3.jpeg
https://stackoverflow.com/questions/71549386/ffmpeg-output-to-framebuffer-fbdev-raspberry-pi-4
See Also
- Zmodopipe - Some examples of ffmpeg reading from a pipe, outputting to a JPEG file, and also ffserver.
- FFMPEG Basics by Frantisek Korbel. This book is based around command line usage, and does not necessarily go into detail on the source code.
- https://johnvansickle.com/ffmpeg/ For ready-made binaries
- http://web.archive.org/web/20221123101906/https://img.ly/blog/ultimate-guide-to-ffmpeg/ - tutorial on ffmpeg
- https://gist.github.com/cbarraco/f6cb40e3f5eb1f2733b5 - Ffmpeg screen sharing (ad-hoc vnc)