I am trying to capture the streams of two IP cameras connected directly to a mini PCIe dual gigabit expansion card in an nVidia Jetson TK1.

I managed to capture the streams of both cameras using GStreamer with the following command:

gst-launch-0.10 rtspsrc location=rtsp://admin:[email protected]:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink rtspsrc location=rtsp://admin:[email protected]:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink 

It shows one window per camera, but prints this output right when the capture starts:

WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage: A lot of buffers are being dropped. 
Additional debug info: 
gstbasesink.c(2875): gst_base_sink_is_too_late(): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage: 
There may be a timestamping problem, or this computer is too slow. 
---> TVMR: Video-conferencing detected !!!!!!!!! 

The streams play well, with "good" synchronization between the cameras too, but after a while one camera suddenly stops, and usually a few seconds later the other one stops as well. Using an interface sniffer like Wireshark I can verify that RTSP packets are still being sent from the cameras.

My purpose is to use these cameras as a stereo camera with openCV. I am able to capture the streams in openCV with the following code:

camera[0].open("rtsp://admin:[email protected]:554/mpeg4cif");//right 
camera[1].open("rtsp://admin:[email protected]:554/mpeg4cif");//left 

It randomly starts capturing well or badly, synchronized or not, with delay or not, but after a while the captured images become unusable, as you can see in the image:

[screenshot showing the corrupted captured frames]

And the output while running the openCV program is usually like this (I copied the most complete one):

[h264 @ 0x1b9580] slice type too large (2) at 0 23 
[h264 @ 0x1b9580] decode_slice_header error 

[h264 @ 0x1b1160] left block unavailable for requested intra mode at 0 6 
[h264 @ 0x1b1160] error while decoding MB 0 6, bytestream (-1) 

[h264 @ 0x1b1160] mmco: unref short failure 

[h264 @ 0x1b9580] too many reference frames 

[h264 @ 0x1b1160] pps_id (-1) out of range 

The cameras used are two SIP-1080J modules.

Does anyone know how to achieve a good capture with openCV? First of all, getting rid of those h264 messages and obtaining stable images for the whole execution of the program.

If not, how can I improve the pipelines and buffers in gstreamer to get a good capture without the streams suddenly stopping? I have never captured into openCV through gstreamer, but maybe one day I will figure out how and solve this problem that way.

Thanks a lot.

Try playing with the encoder parameters - start with the baseline profile, lower the bitrate and the GOP size, and if you are using UDP and experiencing packet loss, try TCP. Actually, with Wireshark you should be able to see whether the RTP sequence numbers are sequential. The errors about reference frames (probably caused by dropped/reordered frames or bad encoding) are the ones that indicate corrupted images and visible artifacts. – 2015-02-25 09:24:57

A few random things to try: does the pipeline behave differently if you launch the cameras from separate gst-launch instances? Have you tried adding "sync=false" to the autovideosink? Have you tried increasing the latency parameter on rtspsrc? Do you have access to a newer version of GStreamer? What encoder settings are you using? – mpr 2015-02-25 13:54:26
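
(For reference, the asker's pipeline with these suggestions applied might look roughly like the line below: TCP transport via rtspsrc's protocols property, a larger latency, and sync=false on the sink. Whether this particular camera accepts RTSP over TCP is untested.)

gst-launch-0.10 rtspsrc location=rtsp://admin:[email protected]:554/mpeg4cif protocols=tcp latency=200 ! decodebin ! ffmpegcolorspace ! autovideosink sync=false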

Rudolfs, it uses UDP. There is no packet loss; every packet is received in order. I don't know how to change the profile: they are cheap IP cameras and only basic software for Windows is available. At least with that software I could change the cameras' IP addresses. I am using L4T (Linux for Tegra, Ubuntu 14.04). – masana 2015-02-26 09:27:11

Answer

After some days of deep searching and some attempts, I moved to using the gstreamer-0.10 API directly. First I learned how to use it with the tutorials at http://docs.gstreamer.com/pages/viewpage.action?pageId=327735

For most of the tutorials you only need to install libgstreamer0.10-dev and some other packages. I installed all of them:

sudo apt-get install libgstreamer0* 

Then copy the .c file of the example you want to try to some location, open a terminal in that folder, and compile it (in some of the examples you have to add more libraries to pkg-config):

gcc basic-tutorial-1.c $(pkg-config --cflags --libs gstreamer-0.10) -o basic-tutorial-1 
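
For instance, the tutorials that use appsrc/appsink need the app library as an extra pkg-config entry, since it lives in a separate library (assuming basic-tutorial-8.c here, which uses those elements):

gcc basic-tutorial-8.c $(pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10) -o basic-tutorial-8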

After that I no longer felt lost, and started trying to mix some C and C++ code. You can compile it with the appropriate g++ command, with a CMakeLists.txt, or however you like... When developing with the nVidia Jetson TK1 I use Nsight Eclipse Edition, and the project properties need to be configured correctly to be able to use the gstreamer-0.10 libs and the openCV libs.

Mixing some code, I finally managed to capture the streams of both of my IP cameras in real time without appreciable delay, without bad decoding of any frame, and with both streams synchronized. The only thing I have not solved yet is getting the frames in color instead of grayscale (I have tried other CV_ values, with "segmentation fault" as the result):

v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer)); 
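
In case someone wants to try it: one possible way to get color (an untested sketch, based on the 0.10 appsink API) is to force 24-bit BGR caps on the appsink right after creating it, so that ffmpegcolorspace converts into the channel order openCV expects, and then build a 3-channel Mat:

/* Untested sketch: request 24-bit BGR from the appsink so the buffer 
   layout matches openCV's default channel order (masks describe BGR 
   for video/x-raw-rgb in GStreamer 0.10). */ 
GstCaps *caps = gst_caps_new_simple ("video/x-raw-rgb", 
    "bpp",        G_TYPE_INT, 24, 
    "depth",      G_TYPE_INT, 24, 
    "endianness", G_TYPE_INT, 4321, 
    "red_mask",   G_TYPE_INT, 0x0000ff, 
    "green_mask", G_TYPE_INT, 0x00ff00, 
    "blue_mask",  G_TYPE_INT, 0xff0000, 
    NULL); 
gst_app_sink_set_caps ((GstAppSink*)data->gst_data.sink, caps); 
gst_caps_unref (caps); 

/* ...and in the capture loop, use a 3-channel Mat instead of CV_8U: */ 
v = Mat(Size(640, 360), CV_8UC3, (char*)GST_BUFFER_DATA(gstImageBuffer)); 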

Next is the complete source code I use to capture with gstreamer, turn the capture into an openCV Mat object, and then show it. The code is for capturing one IP camera only; you can duplicate the objects and methods to capture several cameras at the same time.

#include <opencv2/core/core.hpp> 
#include <opencv2/contrib/contrib.hpp> 
#include <opencv2/highgui/highgui.hpp> 
#include <opencv2/imgproc/imgproc.hpp> 
#include <opencv2/video/video.hpp> 

#include <gst/gst.h> 
#include <gst/app/gstappsink.h> 
#include <gst/app/gstappbuffer.h> 
#include <glib.h> 

#include <stdio.h>  /* printf */ 
#include <string.h> /* strcpy, memset */ 
#include <stdlib.h> /* setenv */ 

#define DEFAULT_LATENCY_MS 1 

using namespace cv; 

typedef struct _vc_cfg_data { 
    char server_ip_addr[100]; 
} vc_cfg_data; 

typedef struct _vc_gst_data { 
    GMainLoop *loop; 
    GMainContext *context; 
    GstElement *pipeline; 
    GstElement *rtspsrc,*depayloader, *decoder, *converter, *sink; 
    GstPad *recv_rtp_src_pad; 
} vc_gst_data; 

typedef struct _vc_data { 
    vc_gst_data gst_data; 
    vc_cfg_data cfg; 
} vc_data; 

/* Global data */ 
vc_data app_data; 

static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data); 


#define VC_CHECK_ELEMENT_ERROR(e, name) \ 
if (!e) { \ 
g_printerr ("Element %s could not be created. Exiting.\n", name); \ 
return -1; \ 
} 

/******************************************************************************* 
Gstreamer pipeline creation and init 
*******************************************************************************/ 
int vc_gst_pipeline_init(vc_data *data) 
{ 
    GstStateChangeReturn ret; 

    // Template 
    GstPadTemplate* rtspsrc_pad_template; 

    // Create a new GMainLoop 
    data->gst_data.loop = g_main_loop_new (NULL, FALSE); 
    data->gst_data.context = g_main_loop_get_context(data->gst_data.loop); 

    // Create gstreamer elements 
    data->gst_data.pipeline = gst_pipeline_new ("videoclient"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.pipeline, "pipeline"); 

    // RTSP source - receives the camera's RTP stream over the network 
    data->gst_data.rtspsrc = gst_element_factory_make ("rtspsrc", "rtspsrc"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.rtspsrc,"rtspsrc"); 

    printf("URL: %s\n",data->cfg.server_ip_addr); 
    g_print ("Setting RTSP source properties: \n"); 
    g_object_set (G_OBJECT (data->gst_data.rtspsrc), "location", data->cfg.server_ip_addr, "latency", DEFAULT_LATENCY_MS, NULL); 

    //RTP H.264 Depayloader 
    data->gst_data.depayloader = gst_element_factory_make ("rtph264depay","depayloader"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.depayloader,"rtph264depay"); 

    //ffmpeg decoder 
    data->gst_data.decoder = gst_element_factory_make ("ffdec_h264", "decoder"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.decoder,"ffdec_h264"); 

    data->gst_data.converter = gst_element_factory_make ("ffmpegcolorspace", "converter"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.converter,"ffmpegcolorspace"); 

    // App sink - hands the decoded buffers over to the application (openCV) 
    data->gst_data.sink = gst_element_factory_make ("appsink", "sink"); 
    VC_CHECK_ELEMENT_ERROR(data->gst_data.sink,"appsink"); 
    gst_app_sink_set_max_buffers((GstAppSink*)data->gst_data.sink, 1); 
    gst_app_sink_set_drop ((GstAppSink*)data->gst_data.sink, TRUE); 
    g_object_set (G_OBJECT (data->gst_data.sink),"sync", FALSE, NULL); 

    // Try to get the RTP receive source pad from rtspsrc up front. 
    // (rtspsrc creates its source pads dynamically, so the actual 
    // linking is done in the pad-added handler below.) 
    rtspsrc_pad_template = gst_element_class_get_pad_template (GST_ELEMENT_GET_CLASS (data->gst_data.rtspsrc),"recv_rtp_src_0"); 

    // Use the template to request the pad 
    data->gst_data.recv_rtp_src_pad = gst_element_request_pad (data->gst_data.rtspsrc, rtspsrc_pad_template, 
    "recv_rtp_src_0", NULL); 

    // Print the name for confirmation 
    g_print ("A new pad %s was created\n", 
    gst_pad_get_name (data->gst_data.recv_rtp_src_pad)); 

    // Add elements into the pipeline 
    g_print(" Adding elements to pipeline...\n"); 
    gst_bin_add_many (GST_BIN (data->gst_data.pipeline), 
      data->gst_data.rtspsrc, 
      data->gst_data.depayloader, 
      data->gst_data.decoder, 
      data->gst_data.converter, 
      data->gst_data.sink, 
     NULL); 

    // Link some of the elements together 
    g_print(" Linking some elements ...\n"); 
    if(!gst_element_link_many (data->gst_data.depayloader, data->gst_data.decoder, data->gst_data.converter, data->gst_data.sink, NULL)) 
     g_print("Error: could not link all elements\n"); 

    // Connect to the pad-added signal for the rtpbin. This allows us to link 
    //the dynamic RTP source pad to the depayloader when it is created. 
    if(!g_signal_connect (data->gst_data.rtspsrc, "pad-added", 
    G_CALLBACK (vc_pad_added_handler), data)) 
     g_print("Error: could not add signal handler\n"); 

    // Set the pipeline to "playing" state 
    g_print ("Now playing A\n"); 
    ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_PLAYING); 
    if (ret == GST_STATE_CHANGE_FAILURE) { 
     g_printerr ("Unable to set the pipeline A to the playing state.\n"); 
     gst_object_unref (data->gst_data.pipeline); 
     return -1; 
    } 

    return 0; 
} 

static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data) { 
    GstPad *sink_pad = gst_element_get_static_pad (data->gst_data.depayloader, "sink"); 
    GstPadLinkReturn ret; 
    GstCaps *new_pad_caps = NULL; 
    GstStructure *new_pad_struct = NULL; 
    const gchar *new_pad_type = NULL; 
    g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src)); 

    /* Check the new pad's name */ 
    if (!g_str_has_prefix (GST_PAD_NAME (new_pad), "recv_rtp_src_")) { 
     g_print (" It is not the right pad. Need recv_rtp_src_. Ignoring.\n"); 
     goto exit; 
    } 

    /* If our converter is already linked, we have nothing to do here */ 
    if (gst_pad_is_linked (sink_pad)) { 
     g_print (" Sink pad from %s already linked. Ignoring.\n", GST_ELEMENT_NAME (src)); 
     goto exit; 
    } 

    /* Check the new pad's type */ 
    new_pad_caps = gst_pad_get_caps (new_pad); 
    new_pad_struct = gst_caps_get_structure (new_pad_caps, 0); 
    new_pad_type = gst_structure_get_name (new_pad_struct); 

    /* Attempt the link */ 
    ret = gst_pad_link (new_pad, sink_pad); 
    if (GST_PAD_LINK_FAILED (ret)) { 
     g_print (" Type is '%s' but link failed.\n", new_pad_type); 
    } else { 
     g_print (" Link succeeded (type '%s').\n", new_pad_type); 
    } 

    exit: 
    /* Unreference the new pad's caps, if we got them */ 
    if (new_pad_caps != NULL) 
     gst_caps_unref (new_pad_caps); 
    /* Unreference the sink pad */ 
    gst_object_unref (sink_pad); 
} 



int vc_gst_pipeline_clean(vc_data *data) { 
    GstStateChangeReturn ret; 
    GstStateChangeReturn ret2; 

    /* Cleanup Gstreamer */ 
    if(!data->gst_data.pipeline) 
     return 0; 

    /* Send the main loop a quit signal */ 
    g_main_loop_quit(data->gst_data.loop); 
    g_main_loop_unref(data->gst_data.loop); 
    ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_NULL); 
    if (ret == GST_STATE_CHANGE_FAILURE) { 
     g_printerr ("Unable to set the pipeline A to the NULL state.\n"); 
     gst_object_unref (data->gst_data.pipeline); 
     return -1; 
    } 

    g_print ("Deleting pipeline\n"); 
    gst_object_unref (GST_OBJECT (data->gst_data.pipeline)); 
    /* Zero out the structure */ 
    memset(&data->gst_data, 0, sizeof(vc_gst_data)); 
    return 0; 
} 


void handleKey(char key) 
{ 
    switch (key) 
    { 
    case 27: /* ESC pressed - placeholder, no action yet */ 
     break; 
    } 
} 


int vc_mainloop(vc_data* data) 
{ 

    GstBuffer *gstImageBuffer; 

    Mat v; 

    namedWindow("view",WINDOW_NORMAL); 

    while (1) { 

     gstImageBuffer = gst_app_sink_pull_buffer((GstAppSink*)data->gst_data.sink); 

     if (gstImageBuffer != NULL) 
     { 
       v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer)); 

       imshow("view", v); 

       handleKey((char)waitKey(3)); 

       gst_buffer_unref(gstImageBuffer); 
     }else{ 
      g_print("gsink buffer didn't return buffer."); 
     } 
    } 
    return 0; 
} 


int main (int argc, char *argv[]) 
{ 
    setenv("DISPLAY", ":0", 0); 

    strcpy(app_data.cfg.server_ip_addr, "rtsp://admin:[email protected]:554/mpeg4cif"); 

    gst_init (&argc, &argv); 

    if(vc_gst_pipeline_init(&app_data) == -1) { 
     printf("Gstreamer pipeline creation and init failed\n"); 
     goto cleanup; 
    } 

    vc_mainloop(&app_data); 

    printf ("Returned, stopping playback\n"); 
    cleanup: 
    return vc_gst_pipeline_clean(&app_data); 
} 
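
To build it, a command along the lines of the tutorial one should work (a sketch: the file name videoclient.cpp is made up, gstreamer-app-0.10 is needed for the appsink calls, and the "opencv" pkg-config name is an assumption about the local install):

g++ videoclient.cpp $(pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10 opencv) -o videoclient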

I hope this helps! ;)