2016-06-09

How to save a frame using QMediaPlayer?

I want to save an image of a frame from QMediaPlayer. After reading the documentation, I understood that I should use QVideoProbe. I am using the following code:

QMediaPlayer *player = new QMediaPlayer(); 
QVideoProbe *probe = new QVideoProbe; 

connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame))); 

qDebug() << probe->setSource(player); // Returns true, hopefully. 

player->setVideoOutput(myVideoSurface); 
player->setMedia(QUrl::fromLocalFile("observation.mp4")); 
player->play(); // Start receiving frames as they get presented to myVideoSurface 

But unfortunately, probe->setSource(player) always returns false for me, so my slot processFrame never gets triggered.

What am I doing wrong? Does anyone have a working example of QVideoProbe?


I had the same problem and I managed to find a workaround; I will post an answer as soon as possible (it is a bit long, though) – IAmInPLS


OK, thank you very much, I am awaiting your reply. Could you also take a look at my previous post about QMediaPlayer ([link](http://stackoverflow.com/questions/37680515/qmediaplayer-duration-error))? I would be interested in your opinion – user3627553

Answers


You are not doing anything wrong. As @Dangangu pointed out, your media object instance does not support monitoring video. I had the same problem (and the same with QAudioProbe, but that is not our concern here). I found a solution by looking at this answer and this one.

The main idea is to subclass QAbstractVideoSurface. Once that is done, the method QAbstractVideoSurface::present(const QVideoFrame &frame) of your QAbstractVideoSurface implementation will be called, and you will be able to process the frames of your video.

As stated here, usually you will only need to reimplement two methods:

  1. supportedPixelFormats, so that the producer can select an appropriate format for the QVideoFrame
  2. present, which allows displaying the frame

But back then, I searched through the Qt source code and happily found this piece of code, which helped me write a full implementation. So, here is the full code for a "video frame grabber":

VideoFrameGrabber.cpp:

#include "VideoFrameGrabber.h" 

#include <QtWidgets> 
#include <QAbstractVideoSurface> 
#include <QVideoSurfaceFormat> 

VideoFrameGrabber::VideoFrameGrabber(QWidget *widget, QObject *parent) 
    : QAbstractVideoSurface(parent) 
    , widget(widget) 
    , imageFormat(QImage::Format_Invalid) 
{ 
} 

QList<QVideoFrame::PixelFormat> VideoFrameGrabber::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const 
{ 
    Q_UNUSED(handleType); 
    return QList<QVideoFrame::PixelFormat>() 
     << QVideoFrame::Format_ARGB32 
     << QVideoFrame::Format_ARGB32_Premultiplied 
     << QVideoFrame::Format_RGB32 
     << QVideoFrame::Format_RGB24 
     << QVideoFrame::Format_RGB565 
     << QVideoFrame::Format_RGB555 
     << QVideoFrame::Format_ARGB8565_Premultiplied 
     << QVideoFrame::Format_BGRA32 
     << QVideoFrame::Format_BGRA32_Premultiplied 
     << QVideoFrame::Format_BGR32 
     << QVideoFrame::Format_BGR24 
     << QVideoFrame::Format_BGR565 
     << QVideoFrame::Format_BGR555 
     << QVideoFrame::Format_BGRA5658_Premultiplied 
     << QVideoFrame::Format_AYUV444 
     << QVideoFrame::Format_AYUV444_Premultiplied 
     << QVideoFrame::Format_YUV444 
     << QVideoFrame::Format_YUV420P 
     << QVideoFrame::Format_YV12 
     << QVideoFrame::Format_UYVY 
     << QVideoFrame::Format_YUYV 
     << QVideoFrame::Format_NV12 
     << QVideoFrame::Format_NV21 
     << QVideoFrame::Format_IMC1 
     << QVideoFrame::Format_IMC2 
     << QVideoFrame::Format_IMC3 
     << QVideoFrame::Format_IMC4 
     << QVideoFrame::Format_Y8 
     << QVideoFrame::Format_Y16 
     << QVideoFrame::Format_Jpeg 
     << QVideoFrame::Format_CameraRaw 
     << QVideoFrame::Format_AdobeDng; 
} 

bool VideoFrameGrabber::isFormatSupported(const QVideoSurfaceFormat &format) const 
{ 
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat()); 
    const QSize size = format.frameSize(); 

    return imageFormat != QImage::Format_Invalid 
      && !size.isEmpty() 
      && format.handleType() == QAbstractVideoBuffer::NoHandle; 
} 

bool VideoFrameGrabber::start(const QVideoSurfaceFormat &format) 
{ 
    const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat()); 
    const QSize size = format.frameSize(); 

    if (imageFormat != QImage::Format_Invalid && !size.isEmpty()) { 
     this->imageFormat = imageFormat; 
     imageSize = size; 
     sourceRect = format.viewport(); 

     QAbstractVideoSurface::start(format); 

     widget->updateGeometry(); 
     updateVideoRect(); 

     return true; 
    } else { 
     return false; 
    } 
} 

void VideoFrameGrabber::stop() 
{ 
    currentFrame = QVideoFrame(); 
    targetRect = QRect(); 

    QAbstractVideoSurface::stop(); 

    widget->update(); 
} 

bool VideoFrameGrabber::present(const QVideoFrame &frame) 
{ 
    if (frame.isValid()) 
    { 
     QVideoFrame cloneFrame(frame); 
     cloneFrame.map(QAbstractVideoBuffer::ReadOnly); 
     const QImage image(cloneFrame.bits(), 
          cloneFrame.width(), 
          cloneFrame.height(), 
          cloneFrame.bytesPerLine(), 
          QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat())); 
     emit frameAvailable(image.copy()); // deep copy: the mapped buffer becomes invalid after unmap() 
     cloneFrame.unmap(); 
    } 

    if (surfaceFormat().pixelFormat() != frame.pixelFormat() 
      || surfaceFormat().frameSize() != frame.size()) { 
     setError(IncorrectFormatError); 
     stop(); 

     return false; 
    } else { 
     currentFrame = frame; 

     widget->repaint(targetRect); 

     return true; 
    } 
} 

void VideoFrameGrabber::updateVideoRect() 
{ 
    QSize size = surfaceFormat().sizeHint(); 
    size.scale(widget->size().boundedTo(size), Qt::KeepAspectRatio); 

    targetRect = QRect(QPoint(0, 0), size); 
    targetRect.moveCenter(widget->rect().center()); 
} 

void VideoFrameGrabber::paint(QPainter *painter) 
{ 
    if (currentFrame.map(QAbstractVideoBuffer::ReadOnly)) { 
     const QTransform oldTransform = painter->transform(); 

     if (surfaceFormat().scanLineDirection() == QVideoSurfaceFormat::BottomToTop) { 
      painter->scale(1, -1); 
      painter->translate(0, -widget->height()); 
     } 

     QImage image(
       currentFrame.bits(), 
       currentFrame.width(), 
       currentFrame.height(), 
       currentFrame.bytesPerLine(), 
       imageFormat); 

     painter->drawImage(targetRect, image, sourceRect); 

     painter->setTransform(oldTransform); 

     currentFrame.unmap(); 
    } 
} 

VideoFrameGrabber.h:

#ifndef VIDEOFRAMEGRABBER_H 
#define VIDEOFRAMEGRABBER_H 

#include <QtWidgets> 
#include <QAbstractVideoSurface> 
#include <QVideoSurfaceFormat> 

class VideoFrameGrabber : public QAbstractVideoSurface 
{ 
    Q_OBJECT 

public: 
    VideoFrameGrabber(QWidget *widget, QObject *parent = 0); 

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
      QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const; 
    bool isFormatSupported(const QVideoSurfaceFormat &format) const; 

    bool start(const QVideoSurfaceFormat &format); 
    void stop(); 

    bool present(const QVideoFrame &frame); 

    QRect videoRect() const { return targetRect; } 
    void updateVideoRect(); 

    void paint(QPainter *painter); 

private: 
    QWidget *widget; 
    QImage::Format imageFormat; 
    QRect targetRect; 
    QSize imageSize; 
    QRect sourceRect; 
    QVideoFrame currentFrame; 

signals: 
    void frameAvailable(QImage frame); 
}; 
#endif //VIDEOFRAMEGRABBER_H 

Note: in the .h, you will see that I added a signal taking the captured image as a parameter. This will allow you to process your frame anywhere in your code. At the time, this signal took a QImage as a parameter, but you can of course take a QVideoFrame instead if you want.
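A possible sketch of that QVideoFrame-based variant (hypothetical, not part of the original answer): only the signal declaration and the emit in present() change, since QVideoFrame is implicitly shared and cheap to pass around.

```cpp
// Hypothetical QVideoFrame-based variant (sketch, untested).
// VideoFrameGrabber.h: the signal now carries the frame itself.
signals:
    void frameAvailable(QVideoFrame frame);   // implicitly shared, cheap to copy

// VideoFrameGrabber.cpp: present() forwards the frame without mapping it;
// the receiving slot calls map()/unmap() itself when it needs the pixels.
bool VideoFrameGrabber::present(const QVideoFrame &frame)
{
    if (frame.isValid())
        emit frameAvailable(frame);
    // ... the format check and repaint logic stay unchanged ...
    return true;
}
```

The trade-off is that every receiver must know how to map the frame and deal with its pixel format, whereas the QImage version hands over a ready-to-use image.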


Now, we are ready to use this video frame grabber:

QMediaPlayer* player = new QMediaPlayer(this); 
// no more QVideoProbe 
VideoFrameGrabber* grabber = new VideoFrameGrabber(this); 
player->setVideoOutput(grabber); 

connect(grabber, SIGNAL(frameAvailable(QImage)), this, SLOT(processFrame(QImage))); 

Now, all you need to do is declare a slot named processFrame(QImage image), and you will receive a QImage every time a frame enters the present method of your VideoFrameGrabber.
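For completeness, here is a minimal sketch of such a slot that saves each received frame to disk. The class name MyPlayer, the member m_frameCount, and the file name pattern are assumptions made for illustration, not part of the original answer:

```cpp
#include <QDebug>
#include <QImage>
#include <QString>

// Hypothetical receiver (class and member names are assumptions).
// Declared in the receiver's header as:
//     private slots:
//         void processFrame(QImage image);
//     private:
//         int m_frameCount = 0;

void MyPlayer::processFrame(QImage image)
{
    // Save each frame as a zero-padded, numbered PNG, e.g. frame_000042.png.
    const QString fileName =
        QStringLiteral("frame_%1.png").arg(m_frameCount++, 6, 10, QLatin1Char('0'));
    if (!image.save(fileName))
        qWarning() << "Could not save" << fileName;
}
```

Since present() is called on the media player's presentation thread, a queued connection to this slot is one reason the deep copy before emitting matters.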

I hope this will help you!


A somewhat silly question: before, I used a QWidget to display the video: `player->setVideoOutput(ui->videoWidget);`. Do I now have to write the following? `player = new QMediaPlayer(this); probe = new VideoFrameGrabber(ui->videoWidget, this); player->setVideoOutput(probe);` – user3627553


That's it: `player = new QMediaPlayer(this); probe = new VideoFrameGrabber(this); player->setVideoOutput(probe);` – IAmInPLS


From the Qt QVideoProbe documentation:

bool QVideoProbe::setSource(QMediaObject *mediaObject) 

Starts monitoring the given mediaObject.

If there is no media service associated with mediaObject, or if it is zero, this probe will be deactivated and this function will return true.

If the media object instance does not support monitoring video, this function will return false.

Any previously monitored objects will no longer be monitored. Passing in the same object will be ignored, but monitoring will continue.

So it seems that your "media object instance does not support monitoring video".


Look at my code – I am not using a mediaRecorder, I am using QMediaPlayer. The video plays at the same time without any problem – user3627553