FFmpeg RTSP with Python

Question:

I have a Basler camera connected to a Raspberry Pi, and I am trying to livestream its feed with FFmpeg to a TCP port on my Windows PC in order to monitor what's happening in front of the camera.

I managed to set up a Python script on the Raspberry Pi which is responsible for grabbing the frames, feeding them to a pipe and streaming them to a TCP port. From that port, I am able to display the stream using FFplay. FFplay is great for quickly testing whether the direction you are heading in is correct, but I want to read every frame from the stream, do some processing, and then display the stream with OpenCV.

Minimally represented, this is the code I use on the Raspberry Pi side of things:

    command = ['ffmpeg', ...]
    p = subprocess.Popen(command, stdin=subprocess.PIPE)
    while camera.IsGrabbing():  # send images as stream until Ctrl-C
        grabResult = camera.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)
        ...

On my PC, the following FFplay command in a terminal works and displays the stream in real time:

    ffplay -rtsp_flags listen rtsp://192.168.1.xxxx:5555/live.sdp?tcp

On my PC, if I use the following Python script, the stream begins, but it fails in the cv2.imshow call because I am not sure how to decode it:

    import subprocess
    ...
    p1 = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

Does anyone know what I need to change in either of those scripts in order to get it to work?

Answer:

You can read the decoded frames from p1.stdout, convert them to a NumPy array, and reshape them.

Change the command so that FFmpeg emits decoded frames in rawvideo format with BGR pixels:

    command = ['C:/ffmpeg/bin/ffmpeg.exe',
               '-rtsp_flags', 'listen',
               '-i', 'rtsp://192.168.1.xxxx:5555/live.sdp?tcp',
               '-f', 'image2pipe',     # Use image2pipe demuxer
               '-pix_fmt', 'bgr24',    # Set BGR pixel format
               '-vcodec', 'rawvideo',  # Get rawvideo output format
               '-']

Read a raw video frame from p1.stdout:

    raw_frame = p1.stdout.read(width*height*3)

Convert the bytes read into a NumPy array, and reshape it to the video frame dimensions:

    frame = np.frombuffer(raw_frame, np.uint8)
    frame = frame.reshape((height, width, 3))

Now you can show the frame by calling cv2.imshow('image', frame).

The solution assumes you know the video frame size (width and height) in advance. The code sample below includes a part that reads the width and height using cv2.VideoCapture, but I am not sure whether it will work in your case (due to '-rtsp_flags', 'listen'). If it does work, you can try capturing with OpenCV instead of FFmpeg.

The following code is a complete working sample that uses a public RTSP stream for testing.
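A sketch along those lines, with some caveats: the demo-server host in in_stream, the '-an' flag, and the cleanup code are assumptions on top of the fragments quoted above, and np.frombuffer stands in for the deprecated np.fromstring.

    import cv2
    import numpy as np
    import subprocess

    # Public RTSP stream used for testing (host is an assumption; only the
    # '/vod/mp4:BigBuckBunny_115k.mov' path is certain).
    in_stream = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'

    # Read video width, height and framerate using OpenCV
    # (use it if you don't know the size of the video frames).
    cap = cv2.VideoCapture(in_stream)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()

    # Let FFmpeg decode the stream and pipe raw BGR frames to stdout.
    command = ['ffmpeg',
               '-i', in_stream,
               '-f', 'image2pipe',     # Use image2pipe demuxer
               '-pix_fmt', 'bgr24',    # Set BGR pixel format
               '-vcodec', 'rawvideo',  # Get rawvideo output format
               '-an',                  # Drop the audio stream (assumption)
               '-']                    # Write to stdout

    p1 = subprocess.Popen(command, stdout=subprocess.PIPE)

    frame_size = width * height * 3    # bgr24 frames have a fixed byte size

    while True:
        raw_frame = p1.stdout.read(frame_size)
        if len(raw_frame) != frame_size:
            break                      # End of stream (or a read error)

        frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, 3))

        cv2.imshow('image', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    p1.stdout.close()
    p1.wait()
    cv2.destroyAllWindows()

The fixed frame size is the whole trick: because every bgr24 frame is exactly width*height*3 bytes, the Python side can slice the byte stream into frames without parsing any container format.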

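For the sending side, the question's Raspberry Pi script is only "minimally represented" above. A sketch of what that side could look like follows; it is an illustration, not the asker's actual code: pypylon is assumed for the Basler camera, the resolution, framerate and mono pixel format are placeholders, and the H.264/RTSP output settings are one plausible way to feed the listening FFplay instance.

    import subprocess
    from pypylon import pylon

    # Placeholders: match these to the actual camera configuration.
    width, height, fps = 1280, 720, 30

    # Push H.264 over RTSP to the PC, where FFplay/FFmpeg is started
    # with '-rtsp_flags listen' and waits for an incoming stream.
    command = ['ffmpeg',
               '-f', 'rawvideo',          # stdin carries raw frames
               '-pix_fmt', 'gray',        # assumption: mono Basler sensor
               '-s', f'{width}x{height}',
               '-r', str(fps),
               '-i', '-',                 # read the frames from stdin
               '-c:v', 'libx264',
               '-preset', 'ultrafast',    # favour latency over compression
               '-tune', 'zerolatency',
               '-f', 'rtsp',
               'rtsp://192.168.1.xxxx:5555/live.sdp']

    p = subprocess.Popen(command, stdin=subprocess.PIPE)

    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()
    camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

    while camera.IsGrabbing():  # send images as stream until Ctrl-C
        grabResult = camera.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)
        if grabResult.GrabSucceeded():
            p.stdin.write(grabResult.Array.tobytes())
        grabResult.Release()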

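And if OpenCV can open the stream directly (which, as noted above, may fail when the receiver has to be the listening side via '-rtsp_flags listen'), the FFmpeg subprocess can be dropped entirely. A minimal sketch, assuming a stream OpenCV can connect to:

    import cv2

    cap = cv2.VideoCapture('rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov')

    while cap.isOpened():
        ret, frame = cap.read()   # frames arrive already decoded as BGR
        if not ret:
            break
        # ... do your processing here ...
        cv2.imshow('image', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()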