ffmpeg is a set of video processing tools used by ZoneMinder to generate video files from the network camera streams.
== What is FFMPEG ==
One thing to know about ffmpeg is that it is versatile in the inputs and outputs it can use. For example, you can take input:
<pre>
from the desktop screen (there are a couple of these; see the ffmpeg wiki)
  x11grab
from the framebuffer itself
  fbdev and /dev/fb0
from a network video stream
  http://ongoingstream.mjpeg or rtsp://
from a UDP socket
  udp://ipaddress:port
from a video on your local machine
  /directory/file
from a file on the internet
  http://justafile.mp4
from a pipe
  rgbledoutput | ffmpeg -i -
</pre>
and you can also output to most of these locations. More information can be found on the ffmpeg wiki: https://trac.ffmpeg.org/wiki/
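As a quick, untested sketch of that versatility, the following would record ten seconds of the desktop to an mp4 (assuming an X session on display :0 and a 1920x1080 screen; both are assumptions here):
<pre>
# Untested sketch: assumes an X session on display :0 and a 1920x1080 screen
ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i :0.0 -t 10 -c:v libx264 desktop.mp4
</pre>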
===Where is ffmpeg in Zoneminder?===
See: https://forums.zoneminder.com/viewtopic.php?t=32450
<pre>
We only use the ffmpeg executable when generating thumbnails and still frame images from the saved mp4.
For encoding, we use the LIBRARIES, so you would need to alter the LD_LIBRARY_PATH.
Encoding works fine on my old nvidia hardware using standard ubuntu packages. Or at least it did the last time I checked.
hwaccel should be of great benefit in ENCODING. It is not useful for decoding at this time.
</pre>
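A minimal sketch of what altering that path might look like, assuming a custom ffmpeg build installed under /usr/local/ffmpeg (a hypothetical prefix; adjust to your actual install, and note that where exactly you set this for the ZM daemons depends on your init system):
<pre>
# Hypothetical prefix: point the dynamic linker at the custom ffmpeg libraries
export LD_LIBRARY_PATH=/usr/local/ffmpeg/lib:$LD_LIBRARY_PATH
</pre>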
== Obtaining FFMPEG ==
You should first check your distribution's package manager. Aside from that, you have the option of compiling from source or downloading a binary; both are linked from the main ffmpeg website.
==Using FFMPEG==
===Testing a Stream Path with FFMPEG===
For example:
 $ ffmpeg -i rtsp://admin:password@192.168.1.64:554/video/1 output.mp4
If ffmpeg connects successfully, it will print the encoding of the stream and the resolution. ffplay can also be used (if you are running a GUI such as X) and is easier in this case; but if you are testing from a headless machine, use ffmpeg and output to a file.
 $ ffplay rtsp://admin:password@192.168.1.64:554/video/1
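ffprobe (shipped alongside ffmpeg) can also report a stream's codecs and resolution without writing a file; a minimal sketch against the same example camera URL:
 $ ffprobe -v error -show_streams rtsp://admin:password@192.168.1.64:554/video/1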
=== A note on the RPI ===
The RPI has its own build of FFMPEG which includes support for the omx and mmal hardware peripherals. It is recommended to obtain it from the official RPI repos. Note that this provides hardware support for exporting, but not necessarily for recording videos (see the paragraphs above). (Last checked around 2020.)
== FFMPEG Video Export Options ==
Ffmpeg is used in exporting events to downloadable video files. Exporting video is done using the [http://www.zoneminder.com/wiki/index.php?title=Special%3ASearch&search=zmvideo.pl&go=Go zmvideo.pl] script.
You can control the options that get passed to ffmpeg during the export process using two config options found in the Images tab of the options dialog.
=== FFMPEG_INPUT_OPTIONS ===
Usually leave this empty.
=== FFMPEG_OUTPUT_OPTIONS ===
In 1.36 these are generally not used, but for historical purposes, here are some possible settings.
To obtain a good quality x264-based mp4 export, the following example works:
<code>-r 30 -vcodec libx264 -threads 2 -b 2000k -minrate 800k -maxrate 5000k</code>
If you want h264 encoded as fast as possible (with some sacrifice in quality), you can try:
<code>-c:v libx264 -preset ultrafast</code>
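On current ffmpeg builds the bitrate flag above would be written <code>-b:v</code>, and a constant-quality (CRF) encode is often simpler than the bitrate triplet; an untested alternative along the same lines:
<code>-c:v libx264 -preset veryfast -crf 23 -movflags +faststart</code>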
==Examples==
===Output video to UDP socket===
 ffmpeg -i myvideo.mp4 -f h264 udp://127.0.0.1:12345
The -f is required and specifies the output format for the UDP stream. This could easily be used with, say, /dev/video0 (a webcam) to restream from a small SBC, as sketched below (although you would probably have better luck with mjpg-streamer, as this solution might not handle disconnects).
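A sketch of that webcam restream (untested; assumes a V4L2 webcam at /dev/video0):
<pre>
# Assumes a V4L2 webcam at /dev/video0; zerolatency tuning reduces buffering
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -f h264 udp://127.0.0.1:12345
</pre>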
===Download Only Part of a Video===
 ffmpeg -t 5 -i input video_output_first_5_seconds.mp4
Note that -t 5 placed before the -i limits reading to the first 5 seconds of the input.
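To instead grab a clip from the middle of a file without re-encoding, something like the following should work (untested; with -c copy the cut snaps to keyframes, so it may not be frame-exact):
 ffmpeg -ss 00:01:00 -t 5 -i input -c copy clip.mp4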
===Single Screenshot of a Video===
 ffmpeg -ss 87.52 -i /mnt/zm1/8/2023-01-26/5643528/5643528-video.mp4 -frames:v 1 /mnt/zm1/8/2023-01-26/5643528/01145-capture.jpg
(From the forum: this is what ZM uses to make thumbnails for the timeline.)
===Single Stream Screenshot===
For live stream images it's possible to do:
 zmu -m <monitor #> -i -U <username> -P <password>
which will output to Monitor##.jpg. This uses the resolution set in Zoneminder.
To grab the image directly from the camera it would be something like:
 ffmpeg -i http://user:password@ipaddress/videostream -frames:v 1 -y snapshot.jpg
===Joining Jpegs===
 ffmpeg -framerate 5 -i %05d-capture.jpg output.mp4
Use ffmpeg to concatenate jpeg images stored by Zoneminder into an mp4. Note that %05d-capture.jpg is a printf-style pattern: a decimal number ('''d''') zero-padded ('''0''') to five digits ('''5'''), followed by the string common to all the jpg files. This is the format Zoneminder uses to store jpegs. Edit the framerate as needed.
(Reference: [https://askubuntu.com/questions/610903/how-can-i-create-a-video-file-from-a-set-of-jpg-images])
While the above would be used for Zoneminder, a non-ZM solution might use ffmpeg's glob feature (note that you must pass -pattern_type glob; you can't simply use an asterisk on its own):
 ffmpeg -r 1 -pattern_type glob -i 'test_*.jpg' -c:v libx264 out.mp4
(Reference: https://superuser.com/questions/624567/how-to-create-a-video-from-images-using-ffmpeg)
===Joining Videos===
Use ffmpeg to concatenate a number of audio/video files.
First, put all desired files into a list:
<code>for f in ./*.mp4; do echo "file '$f'" >> mylist.txt; done</code>
Then combine the files using the concat demuxer:
<code>ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4</code>
Note that special characters and spaces can be troublesome.
(Reference: [https://trac.ffmpeg.org/wiki/Concatenate#samecodec])
===Extract Portion of Video/Audio===
 ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3
Use the -ss option to specify the starting timestamp, and the -t option to specify the encoding duration, e.g. from 3 minutes and 5 seconds in, for 45 seconds. The timestamps need to be in HH:MM:SS.xxx format or in seconds. If you don't specify the -t option, it will go to the end.
Ref: https://stackoverflow.com/questions/9913032/how-can-i-extract-audio-from-video-with-ffmpeg
Note: this doesn't always work as you expect. In my experience ffmpeg moves the cut points around, so don't be surprised if your extract starts earlier or later than you intended.
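One way to tighten the cut points (untested here) is to re-encode instead of stream-copying and to place -ss after the -i, which makes ffmpeg decode from the start and discard frames: slower, but frame-accurate. A sketch keeping both video and audio:
 ffmpeg -i sample.avi -ss 00:03:05 -t 45 -c:v libx264 -c:a aac accurate_cut.mp4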
===Convert Portion of Video to GIF===
 ffmpeg -i video.mp4 -ss 00:03:05 -t 00:00:05.0 output.gif
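GIF quality from the default palette is often poor; the palettegen/paletteuse filters usually help. An untested sketch of the common single-command idiom, downscaled to 480 pixels wide at 10 fps:
 ffmpeg -ss 00:03:05 -t 5 -i video.mp4 -filter_complex "fps=10,scale=480:-1,split [a][b]; [a] palettegen [p]; [b][p] paletteuse" output.gif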
===Get Image from Network Stream and Output to Remote Framebuffer===
<pre>#!/bin/bash
ffmpeg -i http://user:password@ipaddress/videostream -frames:v 1 -y snapshot.jpg
ffmpeg -i snapshot.jpg -s 320x240 -f rawvideo -pix_fmt rgb565 -vcodec rawvideo -r 1 -y output.raw
# scp doesn't work for device files:
#scp output.raw root@ipaddress:/dev/fb0
# trick: dd if=/dev/mtd0 | ssh me@myhost "dd of=mtd0.img"
# https://unix.stackexchange.com/questions/189722/can-you-scp-a-device-file
dd if=./output.raw | ssh root@ipaddress "dd of=/dev/fb0"</pre>
This is an example of taking an image from a network IP camera and outputting it to the framebuffer of a monitor/LCD. The above is not optimized and is meant as a demonstration. It might be used, e.g., for an RPI with a TFT LCD attached. Note that the pixel format and resolution above are for a 16-bits-per-pixel TFT that is 320x240; your framebuffer will likely be different. You can find your parameters by taking a snapshot from the framebuffer with ffmpeg and looking at the output.
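When the framebuffer is local rather than remote, ffmpeg can skip the raw/dd step and write straight to it through its fbdev output device; a sketch (untested; the pixel format and size must match what your framebuffer expects):
 ffmpeg -i snapshot.jpg -vf scale=320:240 -pix_fmt rgb565le -f fbdev /dev/fb0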
===RTSP Streaming with FFmpeg===
I haven't tested the command below, but I believe it was taken from the forum and should work with at most minor adjustments. When in doubt, also see the ffmpeg wiki streaming guide (https://trac.ffmpeg.org/wiki/StreamingGuide) and ctrl-f for rtsp.
 LIBVA_DRIVER_NAME=i965 ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_device foo -i https://stream-eu1-delta.dropcam.com/ne ... ZZZZZZZZZZ -f rtsp -rtsp_transport udp rtsp://192.168.109.29:5545/doorbell
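Note that ffmpeg's -f rtsp output pushes to an existing RTSP server rather than listening itself, so something like MediaMTX must already be running at the target address. A stripped-down, untested variant without the VAAPI parts (the local URL and stream path here are placeholders):
 ffmpeg -i input.mp4 -c:v libx264 -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:8554/mystream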
===Overlay second video on a video stream===
<pre>
ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -i myvideo.mp4 -filter_complex overlay -f h264 udp://127.0.0.1:12345

ffplay udp://localhost:12345

ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -f image2 -stream_loop -1 -i overlay.png -filter_complex overlay -f h264 udp://127.0.0.1:12345
</pre>
Here's an example of overlaying a second video on a live stream from an IP camera; the ffplay command is for viewing the stream. Here myvideo.mp4 is smaller in resolution than the IP camera stream. You can of course also overlay images, including transparent images that update; the third command is the proper syntax for that.
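By default the overlay filter places the second input at the top-left; it accepts x:y expressions if you want it elsewhere. An untested variant of the first command that pins myvideo.mp4 to the top-right with a 10 pixel margin:
 ffmpeg -i "rtsp://user:pass@ipaddress:554/videostream" -i myvideo.mp4 -filter_complex "overlay=W-w-10:10" -f h264 udp://127.0.0.1:12345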
===Debugging Media streams===
 ffmpeg -loglevel [info,debug,etc] -i input output.mp4
 ffmpeg -debug mmco -i rtsp://user:password@ipaddress:554/streampath output.mp4
The first is the standard way to debug; the second enables tracing for specific parts of ffmpeg. These may be useful if you are trying to determine why a camera is failing to connect properly to ffmpeg (reference book: FFMPEG Basics). Note that the -debug flag accepts a number of parameters other than mmco (which is only valid for h264); a few other possible values are buffers, pict, bitstream, and rc.
===Demo Camera to Test ZM===
<pre>
"Is there a way to do a dummy test to see if there's a problem with my software installation? It seems to me that it's quite complex; like, instead of a camera, use a file as the video output of a camera to test the software installation."

Yeah, with ffmpeg. I do it for a generated monitor that displays weather/alert data. Also to visualize audio from my lorex doorbell (been meaning to write a thread about that; alarms from audio are just crazy specific and have lots of use cases (just think, with zoneminder, how specific you can be when choosing your zones on an audio wave 'showcqt', where your zones are frequencies and you control the amplitude and color variation)).

Here is a simple rtp example to generate a monitor:
</pre>
 ffmpeg -re -f lavfi -i testsrc=s=1280x720:r=20 -f rtp_mpegts -pix_fmt yuv420p rtp://127.0.0.1:4004/
Ref: https://forums.zoneminder.com/viewtopic.php?p=130232
and: http://trac.ffmpeg.org/wiki/FancyFilteringExamples
Or use something like the following for an mp4 video:
 ffmpeg -re -i movie.mp4 -f rtp_mpegts -pix_fmt yuv420p rtp://127.0.0.1:4004/
Test with ZM, or alternatively:
 ffplay rtp://127.0.0.1:4004
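If your ffmpeg build includes drawtext (libfreetype, with fontconfig to find a default font; both are assumptions about your build), the test source can carry a live clock, which makes stalled or dropped frames easier to spot in ZM. An untested sketch:
 ffmpeg -re -f lavfi -i "testsrc=s=1280x720:r=20,drawtext=text='%{localtime}':x=10:y=10:fontsize=48:fontcolor=white" -f rtp_mpegts -pix_fmt yuv420p rtp://127.0.0.1:4004/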
==Framebuffer Notes==
<small>
Displaying to a framebuffer under Linux; see also: using an SPI-connected LCD. Examples:
* https://elinux.org/MiniDisplay_Cape
* https://github.com/jeidon/cfa_bmp_loader/blob/master/sample-code/main.c
Essentially: init code for the hardware, then write via SPI. You can possibly have multiple SPI screens (if you have multiple SPI buses).
General notes on framebuffer writing: https://elinux.org/RPi_Framebuffer
This is interesting: you can get a USB-to-LCD (16 character) and just write to it as a serial port. http://web.archive.org/web/20211207191549/https://hamvoip.org/hamradio/USBLCD/
There may be higher resolution screens, but more research is needed; this one looks limited, although easier to use: https://www.cnx-software.com/2022/04/29/turing-smart-screen-a-low-cost-3-5-inch-usb-type-c-information-display/ Maybe also consider an Uno with a TFT shield.
More general notes on writing to the fb: https://web.archive.org/web/20210512060006/https://avikdas.com/2019/01/23/writing-gui-applications-on-raspberry-pi-without-x.html
You can also, of course, read from the framebuffer, not just write to it:
 sudo ffmpeg -f fbdev -framerate 1 -i /dev/fb0 -frames:v 1 screenAA3.jpeg
https://stackoverflow.com/questions/71549386/ffmpeg-output-to-framebuffer-fbdev-raspberry-pi-4
</small>
==See Also==
* [[GPU_passthrough_in_VMWare]] - Some info on using a GPU with ZM
* FFMPEG Basics by Frantisek Korbel. This book is based around command line usage, and does not necessarily go into detail on the source code.
* https://johnvansickle.com/ffmpeg/ - Ready-made binaries
* http://web.archive.org/web/20221123101906/https://img.ly/blog/ultimate-guide-to-ffmpeg/ - Tutorial on ffmpeg
* https://directfb2.github.io
* [[Zmodopipe]] - Some examples of ffmpeg reading from a pipe, outputting to a JPEG file, and also ffserver.
* https://gist.github.com/cbarraco/f6cb40e3f5eb1f2733b5 - Ffmpeg screen sharing (ad-hoc VNC)
* https://wiki.zoneminder.com/How_to_view_the_latest_frame_of_a_camera

[[Category:Dummies_Guide]]