2012-07-04

EDIT: Upgrading to OpenCV 2.4.2 and FFMPEG 0.11.1 seems to have resolved all of the errors and connection problems, but it still has not fixed the slow frame rate. See: OpenCV network (IP) camera frames per second slow after initial burst

I am using the default OpenCV package in Ubuntu 12.04, which I believe is 2.3.1. I am connecting to a Foscam FI8910W streaming MJPEG. I have seen people say the best approach is opencv + libjpeg + curl, since it is faster than the gstreamer solution. However, I can only occasionally (about 50% of the time) connect to the camera from OpenCV as it is built and get a video stream. The stream starts at about 30 fps for roughly one second, then slows to 5-10 fps. The project I am working on requires 6 cameras running at 15-30 fps (faster is better).

Here are my questions:

  1. Is this a problem that is fixed in 2.4.2, so I should just upgrade?
  2. If not, any ideas why I get a short burst and then it slows down?
  3. Is the best solution still to use curl + libjpeg?
  4. I see lots of people saying that solutions have been posted, but very few actual links to the posts with the actual solutions. Having all the actual solutions (curl and gstreamer included) referenced in one place would be very handy, per http://opencv-users.1802565.n2.nabble.com/IP-camera-solution-td7345005.html

Here is my code:

    // Note: wr (a VideoWriter), HEADER_HEIGHT, coutPrep, actValueStr, fontFace,
    // fontScale and fontThickness are declared elsewhere and not shown here.
    VideoCapture cap;
    cap.open("http://10.10.1.10/videostream.asf?user=test&pwd=1234&resolution=32");
    Mat frame;
    cap >> frame;
    wr.open("test.avi", CV_FOURCC('P','I','M','1'), 29.92, frame.size(), true);
    if(!wr.isOpened())
    {
        cout << "Video writer open failed" << endl;
        return -1;
    }
    // Output canvas: a text header drawn above the camera frame
    Mat dst = Mat::zeros(frame.rows + HEADER_HEIGHT, frame.cols, CV_8UC3);
    Mat roi(dst, Rect(0, HEADER_HEIGHT-1, frame.cols, frame.rows));
    Mat head(dst, Rect(0, 0, frame.cols, HEADER_HEIGHT));
    Mat zhead = Mat::zeros(head.rows, head.cols, CV_8UC3);
    namedWindow("test", 1);
    time_t tnow;
    tm *tS;
    double t1 = (double)getTickCount();
    double t2;
    for(int i = 0; ; i++) // infinite loop
    {
        cap >> frame;
        if(!frame.data)
            break;
        tnow = time(0);
        tS = localtime(&tnow);
        frame.copyTo(roi);
        std::ostringstream L1, L2;
        L1 << tS->tm_year+1900 << " " << coutPrep << tS->tm_mon+1 << " ";
        L1 << coutPrep << tS->tm_mday << " ";
        L1 << coutPrep << tS->tm_hour;
        L1 << ":" << coutPrep << tS->tm_min << ":" << coutPrep << tS->tm_sec;
        actValueStr = L1.str();
        zhead.copyTo(head); // clear the header before redrawing the text
        putText(dst, actValueStr, Point(0, HEADER_HEIGHT/2), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
        L2 << "Frame: " << i;
        t2 = (double)getTickCount();
        L2 << " " << (t2 - t1)/getTickFrequency()*1000. << " ms";
        t1 = (double)getTickCount();
        actValueStr = L2.str();
        putText(dst, actValueStr, Point(0, HEADER_HEIGHT), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
        imshow("test", dst);
        wr << dst; // write the composited frame to file
        cout << "Frame: " << i << endl;
        if(waitKey(30) >= 0)
            break;
    }

Here are the errors listed when it runs normally:

Opening 10.10.1.10 
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. 
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. 
[asf @ 0x701de0] max_analyze_duration reached 
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate 
[asf @ 0x701de0] ignoring invalid packet_obj_size (21084 656 21720 21740) 
[asf @ 0x701de0] freeing incomplete packet size 21720, new 21696 
[asf @ 0x701de0] ff asf bad header 0 at:1029744 
[asf @ 0x701de0] ff asf skip 678 (unknown stream) 
[asf @ 0x701de0] ff asf bad header 45 at:1030589 
[asf @ 0x701de0] packet_obj_size invalid 
[asf @ 0x701de0] ff asf bad header 29 at:1049378 
[asf @ 0x701de0] packet_obj_size invalid 
[asf @ 0x701de0] freeing incomplete packet size 21820, new 21684 
[asf @ 0x701de0] freeing incomplete packet size 21684, new 21836 
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. 
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. 
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate 
Successfully opened network camera 
[swscaler @ 0x8cf400] No accelerated colorspace conversion found from yuv422p to bgr24. 
Output #0, avi, to 'test.avi': 
Stream #0.0: Video: mpeg1video (hq), yuv420p, 640x480, q=2-31, 19660 kb/s, 90k tbn, 29.97 tbc 
[swscaler @ 0x9d25c0] No accelerated colorspace conversion found from yuv422p to bgr24. 
Frame: 0 
[swscaler @ 0xa89f20] No accelerated colorspace conversion found from yuv422p to bgr24. 
Frame: 1 
[swscaler @ 0x7f7840] No accelerated colorspace conversion found from yuv422p to bgr24. 
Frame: 2 
[swscaler @ 0xb9e6c0] No accelerated colorspace conversion found from yuv422p to bgr24. 
Frame: 3 

Sometimes it hangs at the first Estimating duration from bitrate statement.


Are you sure it is not a camera issue? Could you try pointing a bright light at the camera and then check the frame rate? – Sunius

Answers

0

Have you tried removing the code that writes to disk? I have seen very similar performance problems with USB cameras when the disk buffer fills up. The frame rate is high at first, then drops dramatically.

If that is the problem, another option is to change the compression codec to something that compresses more significantly.

0

A fast initial FPS that drops to a slow FPS would suggest that the camera is increasing its exposure time to compensate for a poorly lit subject. The camera analyzes the first few frames and then adjusts the exposure time accordingly.

It seems the actual FPS is a combination of two things:

  1. Hardware limitations (which define the maximum FPS)
  2. Required exposure time (which defines the minimum FPS)

The hardware may have the bandwidth needed to transfer X FPS, but a poorly lit subject can require an exposure time that slows the actual FPS. For example, if each frame needs 0.1 seconds of exposure, the fastest possible FPS will be 10.

To test this, measure the FPS with the camera pointed at a poorly lit subject and compare it to the FPS with the camera pointed at a well lit subject. Be sure to exaggerate the lighting conditions, and give the camera a few seconds to detect the required exposure.