3

OpenCV Python: reading video from a named pipe

I am trying to achieve the result shown in this video (method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o

The gist of it is to stream video from a Raspberry Pi to an Ubuntu PC and process it there with OpenCV and Python.

I use the command

raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777

to stream the video to my PC. On the PC I then create a named pipe 'fifo' and redirect the netcat output into it:

nc -l -p 5777 -v > fifo 
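(The pipe itself has to exist before nc can write into it; presumably it was created beforehand with something like

mkfifo fifo

which is the step the question text only describes.)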

Then I try to read the pipe and display the result in a Python script:

import cv2

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if ret == False:
        continue  # skip this iteration if the read failed

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

But I just end up with the error

[mp3 @ 0x18b2940] Header missing

The error is produced by the line video_capture = cv2.VideoCapture(r'fifo').

When I instead redirect the netcat output on the PC to a file and then read the file in Python, the video works, but it plays back roughly 10x too fast.

I know the problem is with the Python script, because the nc transfer itself works (to a file), but I cannot find any clues.

How do I get the result shown in the referenced video (method 3)?

Answers

2

I was also trying to achieve the same result as that video. At first I tried a similar approach, but it seems cv2.VideoCapture() cannot read from a named pipe; some more preprocessing is required.

ffmpeg is the way to go! You can install and compile ffmpeg by following the instructions given at this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

Once it is installed, you can change your code as shown below:

import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [FFMPEG_BIN,
           '-i', 'fifo',          # fifo is the named pipe
           '-pix_fmt', 'bgr24',   # opencv requires bgr24 pixel format
           '-vcodec', 'rawvideo',
           '-an', '-sn',          # disable audio and subtitle processing (there are none)
           '-f', 'image2pipe', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame: one 640x480 bgr24 frame is 640*480*3 bytes
    raw_image = pipe.stdout.read(640*480*3)
    if len(raw_image) != 640*480*3:
        break  # end of stream or incomplete frame
    # transform the bytes read into a numpy array
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))  # notice how height is specified first, then width

    cv2.imshow('Video', image)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    pipe.stdout.flush()

cv2.destroyAllWindows()
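As a side note, the frame-reading loop can be factored into a small generator, which makes the end-of-stream handling explicit. This is only a sketch under the same assumptions (640x480 bgr24 frames, the pipe object created above); read_frames is a made-up helper name, not an ffmpeg or OpenCV API:

def read_frames(pipe, width=640, height=480):
    # Yield bgr24 frames from ffmpeg's stdout until the stream ends
    frame_size = width * height * 3  # bgr24 uses 3 bytes per pixel
    while True:
        raw = pipe.stdout.read(frame_size)
        if len(raw) < frame_size:
            return  # end of stream or incomplete frame
        yield numpy.frombuffer(raw, dtype='uint8').reshape((height, width, 3))

for frame in read_frames(pipe):
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break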

Nothing needs to change in the script on the Raspberry Pi side.

This worked like a charm for me. The video lag is negligible. Hope it helps.

+0

I guess this is the part that runs on the Linux desktop executing createStream.py, but you don't seem to show what needs to run on the Raspberry Pi, or how to run either end of this two-machine setup? –

+0

Well @Richard, we are trying to get the result shown in the video (method 3): https://www.youtube.com/watch?v=sYGdge3T30o. Everything is exactly as explained in the video. I was just helping with the Python script that reads from the named pipe, which the video does not show. –

+0

I was hoping this would let me use ffmpeg's command-line arguments to force hardware decoding via qsv and h264_qsv, instead of OpenCV's hidden defaults. While this answer technically does that, I actually saw a speed decrease compared to using cv2.VideoCapture('filename.mp4'): I get about 111fps instead of 259fps. (On the same system, ffmpeg decoding to null gets over 1100fps.) I think it may be because of all the data being piped around. Still, at least it's a good proof of concept. – TheAtomicOption

0

I was working on a similar problem, and with a bit more research I eventually stumbled upon the following:

Skip to the solution: https://stackoverflow.com/a/48675107/2355051

I ended up adapting this picamera python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera

# Connect a client socket to the server on port 777 (change 10.0.0.3 to
# the IP/hostname of your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()

            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())

            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
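For reference, the wire format this sender uses (and which the receiver below parses) is just a 4-byte little-endian unsigned length prefix, followed by that many bytes of JPEG data, with a zero length signalling the end of the stream. A minimal illustration of the framing, with a made-up payload:

import struct

jpeg_bytes = b'...jpeg data...'  # placeholder payload

# Sender side: 4-byte length prefix, then the payload
packet = struct.pack('<L', len(jpeg_bytes)) + jpeg_bytes

# Receiver side: parse the prefix back out and slice the payload
(length,) = struct.unpack('<L', packet[:4])
assert packet[4:4 + length] == jpeg_bytes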

On the machine that is processing the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the JPEG data with opencv and do some processing on it
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not output an image if you do not use waitKey
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up windows once the stream ends

This solution achieves results similar to the video I referenced in my original question. Larger-resolution frames increase the latency of the feed, but that is tolerable for the purposes of my application.

You need to run processStream.py first, and then start createStream.py on the Raspberry Pi.
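Assuming both files are saved under these names and the IP address in createStream.py points at the processing machine, that would be something like:

python processStream.py    # on the processing machine, started first

python createStream.py     # then on the Raspberry Pi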