Input frames start on the GPU as ID3D11Texture2D pointers. I encode them to H.264 using FFmpeg + NVENC. NVENC works perfectly if I download the textures to CPU memory as AV_PIX_FMT_BGR0, but I’d like to cut out the CPU texture download entirely and pass the GPU memory pointer directly into the encoder in native ..
I am trying to create a video from my images on Windows 10. The images are named lu_1.jpg, lu_2.jpg, and so on up to 1000. I tried this code in a .bat file to get the video: ffmpeg -framerate 1 -i lu-%d.jpg -c:v libx264 -r 30 ../lu_output.mp4 Unfortunately, I get the following error ..
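The likely culprit in the command above is a simple filename mismatch: the files are named with an underscore (lu_1.jpg) while the input pattern uses a hyphen (lu-%d.jpg). A sketch of a corrected .bat line (the output path is kept from the question; adding -pix_fmt yuv420p is an extra step I'd suggest for player compatibility, not part of the original command):

```bat
rem Inside a .bat file, percent signs must be doubled (%%d);
rem on the interactive command line a single %d is used instead.
ffmpeg -framerate 1 -i lu_%%d.jpg -c:v libx264 -pix_fmt yuv420p -r 30 ../lu_output.mp4
```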
I ran into the following issue. I’m using ffmpeg to batch resize/compress photos and videos on Windows 10. I was using version 3, and since upgrading to version 4.4 of ffmpeg the quality of the outputs has dropped. I found out that I need to modify my script and explicitly define the output encoder; however, when defining h264 ..
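One common stumbling block here: in ffmpeg, `h264` names the decoder, so `-c:v h264` on the output side typically fails with "Unknown encoder"; the software H.264 encoder is `libx264`. A hedged sketch (filenames are placeholders, and the -crf value is just an illustrative quality setting):

```bat
rem libx264 is ffmpeg's software H.264 encoder; -crf controls quality
rem (lower = higher quality, 18-28 is the usual range).
ffmpeg -i input.mp4 -c:v libx264 -crf 18 -preset slow -c:a copy output.mp4
```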
I am trying to create a remote desktop application, and I have a pretty good idea of how I’ll capture the screen, encode it to H.264 or HEVC, and then write it to a file. In my case, though, I want to have this encoded stream sent over UDP, but without the built-in streaming method ..
I’m trying to convert fourcc V210 (a packed YUV 4:2:2 format) into P010 (a planar YUV 4:2:0 format). I think I’ve implemented it according to the spec, but the renderer is showing a green image, so something is off. Decoding V210 has a decent example in ffmpeg (the defines are modified from their solution), but I can’t ..
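A debugging note: a solid green frame in YUV usually means the planes came out as zeros (Y=U=V=0 renders as green), which points at chroma plane offsets or strides being wrong rather than the per-pixel math. One way to narrow it down is to let ffmpeg's own swscale produce a reference P010 buffer to diff against the hand-rolled conversion (filenames here are placeholders):

```bat
rem Reference conversion via swscale, to byte-compare against a
rem custom V210 -> P010 implementation.
ffmpeg -i input_v210.mov -pix_fmt p010le -f rawvideo reference_p010.yuv
```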
I am trying to batch-convert WAV files from stereo to mono using ffmpeg on Windows. Source: Windows..
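Downmixing to mono in ffmpeg is done with `-ac 1`. A minimal .bat loop sketch, assuming the WAV files are in the current folder and a `mono` subfolder already exists for the outputs:

```bat
rem Loop over all WAV files; %%~nf expands to the filename without
rem its extension. -ac 1 downmixes the audio to a single channel.
for %%f in (*.wav) do ffmpeg -i "%%f" -ac 1 "mono\%%~nf.wav"
```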
I’m running ffmpeg v4.4 under Windows 10. I’ve got a sequence of ~20k images with filenames zero-padded to 8 digits that I’d like to turn into a gif. The filenames are: frame00000000.gif, frame00000001.gif, frame00000002.gif, etc. But no matter what I try for the -i parameter, ffmpeg throws a "No such file or directory" error ..
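An 8-digit zero-padded sequence matches the pattern `frame%08d.gif`; a "No such file or directory" error usually means the pattern didn't match the files (e.g. `%8d` instead of `%08d`, a missing `-start_number` when the sequence doesn't begin near 0, or an unescaped `%` inside a .bat file). A sketch, with an assumed frame rate of 25:

```bat
rem %%08d = decimal index zero-padded to 8 digits (%% doubled for .bat).
rem -start_number 0 is the default, stated here for clarity.
ffmpeg -framerate 25 -start_number 0 -i frame%%08d.gif output.gif
```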
I am just wondering if there is a way to receive status and error information from ffmpeg for an active stream, in C++. I’d like to be able to monitor what’s going on with an encoded or decoded stream and react accordingly to errors that might be seen. Something with status/error codes would be great, ..
My environment is a Raspberry Pi 3B running Raspberry Pi OS, and I’m building remotely from Visual Studio 2019 on Windows 10. I want to play PCM audio decoded by FFmpeg through ALSA (the C language API). But this code returns an underrun error (EPIPE) at snd_pcm_writei and plays a crackly sound. The way the resources are managed is probably wrong, but ..
Long story short: I have an H.264 SDI video stream with an embedded timestamp (a KLV VANC ancillary packet – a type 02 KLV metadata packet in VANC space) and need to decode and process the frames in real time. I am using OpenCV for image processing, but it discards the critical ancillary data (the embedded timestamps). How could I decode the H.264 stream with ..