Qt - H.264 video streaming using FFmpeg libraries

I want to display my IP camera's stream in my Qt Widgets application. First I connect to the UDP port of the IP camera, which streams H.264-encoded video. After the socket is bound, on every readyRead() signal I fill a buffer with the received datagrams in order to assemble a full frame.
Variable initialization:
AVCodec *codec;
AVCodecContext *codecCtx;
AVFrame *frame;
AVPacket packet;
this->buffer.clear();
this->socket = new QUdpSocket(this);
QObject::connect(this->socket, &QUdpSocket::connected, this, &H264VideoStreamer::connected);
QObject::connect(this->socket, &QUdpSocket::disconnected, this, &H264VideoStreamer::disconnected);
QObject::connect(this->socket, &QUdpSocket::readyRead, this, &H264VideoStreamer::readyRead);
QObject::connect(this->socket, &QUdpSocket::hostFound, this, &H264VideoStreamer::hostFound);
QObject::connect(this->socket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(error(QAbstractSocket::SocketError)));
QObject::connect(this->socket, &QUdpSocket::stateChanged, this, &H264VideoStreamer::stateChanged);
avcodec_register_all();

codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec) {
    qDebug() << "Codec not found";
    return;
}

codecCtx = avcodec_alloc_context3(codec);
if (!codecCtx) {
    qDebug() << "Could not allocate video codec context";
    return;
}

if (codec->capabilities & CODEC_CAP_TRUNCATED)
    codecCtx->flags |= CODEC_FLAG_TRUNCATED;
codecCtx->flags2 |= CODEC_FLAG2_CHUNKS;

AVDictionary *dictionary = nullptr;
if (avcodec_open2(codecCtx, codec, &dictionary) < 0) {
    qDebug() << "Could not open codec";
    return;
}
The algorithm is as follows:
void H264VideoImageProvider::readyRead() {
    QByteArray datagram;
    datagram.resize(this->socket->pendingDatagramSize());
    QHostAddress sender;
    quint16 senderPort;

    this->socket->readDatagram(datagram.data(), datagram.size(), &sender, &senderPort);

    QByteArray rtpHeader = datagram.left(12);
    datagram.remove(0, 12);

    int nal_unit_type = datagram[0] & 0x1F;
    bool start = (datagram[1] & 0x80) != 0;
    int seqNo = rtpHeader[3] & 0xFF;

    qDebug() << "H264 video decoder::readyRead()"
             << "from: " << sender.toString() << ":" << QString::number(senderPort)
             << "\n\tDatagram size: " << QString::number(datagram.size())
             << "\n\tH264 RTP header (hex): " << rtpHeader.toHex()
             << "\n\tH264 VIDEO data (hex): " << datagram.toHex();

    qDebug() << "nal_unit_type = " << nal_unit_type << " - " << getNalUnitTypeStr(nal_unit_type);
    if (start)
        qDebug() << "START";

    if (nal_unit_type == 7) {
        this->sps = datagram;
        qDebug() << "Sequence parameter found = " << this->sps.toHex();
        return;
    } else if (nal_unit_type == 8) {
        this->pps = datagram;
        qDebug() << "Picture parameter found = " << this->pps.toHex();
        return;
    }

    // VIDEO_FRAME
    if (start) {
        if (!this->buffer.isEmpty())
            decodeBuf();

        this->buffer.clear();
        qDebug() << "Initializing new buffer...";

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
        this->buffer.append(this->sps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
        this->buffer.append(this->pps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
    }

    qDebug() << "Appending buffer data...";
    this->buffer.append(datagram);
}
- The first 12 bytes of the datagram are the RTP header
- Everything else is video data
- The last 5 bits of the first VIDEO DATA byte say which NAL unit type it is. I always get one of the following 4 values (1 coded non-IDR slice, 5 coded IDR slice, 7 SPS, 8 PPS)
- The 5th bit in the 2nd VIDEO DATA byte says whether this packet is the START of the data in a frame
- All video data is stored in the buffer, beginning at START
- Once a new frame arrives - START is set - the buffer is decoded and a new buffer is generated
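The byte-level parsing the list above describes can be sketched without Qt. This is an illustrative standalone version (the struct and function names are made up for the sketch, and the packet bytes in the comments are fabricated), not the original class:

```cpp
#include <cstdint>
#include <vector>

// Minimal view of the fields the readyRead() handler extracts.
struct ParsedPacket {
    uint8_t  nalUnitType; // last 5 bits of the first payload byte
    bool     start;       // high bit of the second payload byte
    uint16_t seqNo;       // RTP sequence number (header bytes 2-3, big-endian)
};

ParsedPacket parseDatagram(const std::vector<uint8_t> &datagram) {
    ParsedPacket p{};
    // Note: the RTP sequence number spans header bytes 2 and 3;
    // the question's code reads only byte 3.
    p.seqNo       = static_cast<uint16_t>((datagram[2] << 8) | datagram[3]);
    p.nalUnitType = datagram[12] & 0x1F;        // first byte after the 12-byte RTP header
    p.start       = (datagram[13] & 0x80) != 0; // S bit; defined for FU-A packets (type 28)
    return p;
}
```

One caveat worth keeping in mind: the start (S) bit lives in the FU header, which only exists for fragmentation units (NAL types 28/29 per the RTP payload format for H.264); for single-NAL-unit packets (types 1, 5, 7, 8) the second payload byte is already slice data.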
The frame for decoding is generated like this:

SPS
PPS
concatenated VIDEO DATA
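As a standalone sketch (assuming the SPS, PPS, and concatenated payload have already been collected, and using plain std::vector instead of QByteArray), the Annex-B buffer handed to the decoder is built like this:

```cpp
#include <cstdint>
#include <vector>

// Prepend the 4-byte Annex-B start code 00 00 00 01 before each NAL unit,
// just as the readyRead() handler above does with QByteArray::append().
void appendNal(std::vector<uint8_t> &buf, const std::vector<uint8_t> &nal) {
    const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
    buf.insert(buf.end(), startCode, startCode + 4);
    buf.insert(buf.end(), nal.begin(), nal.end());
}

// Frame layout: start code + SPS, start code + PPS, start code + slice data.
std::vector<uint8_t> buildAnnexBFrame(const std::vector<uint8_t> &sps,
                                      const std::vector<uint8_t> &pps,
                                      const std::vector<uint8_t> &videoData) {
    std::vector<uint8_t> buf;
    appendNal(buf, sps);
    appendNal(buf, pps);
    appendNal(buf, videoData);
    return buf;
}
```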
Decoding is done with the avcodec_decode_video2() function from the FFmpeg library:
void H264VideoStreamer::decode() {
    av_init_packet(&packet);
    av_new_packet(&packet, this->buffer.size());
    memcpy(packet.data, this->buffer.data_ptr(), this->buffer.size());
    packet.size = this->buffer.size();

    frame = av_frame_alloc();
    if (!frame) {
        qDebug() << "Could not allocate video frame";
        return;
    }

    int got_frame = 1;
    int len = avcodec_decode_video2(codecCtx, frame, &got_frame, &packet);
    if (len < 0) {
        qDebug() << "Error while decoding frame.";
        return;
    }

    //if (got_frame > 0) { // got_frame is always 0
    //    qDebug() << "Data decoded: " << frame->data[0];
    //}

    char *frameData = (char *) frame->data[0];
    QByteArray decodedFrame;
    decodedFrame.setRawData(frameData, len);

    qDebug() << "Data decoded: " << decodedFrame;

    av_frame_unref(frame);
    av_free_packet(&packet);

    emit imageReceived(decodedFrame);
}
My idea is that in the UI thread, which receives the imageReceived signal, I convert decodedFrame directly into a QImage and refresh it once a new frame has been decoded and sent to the UI.
Is this a good approach to decoding an H.264 stream? I am facing the following problems:
- avcodec_decode_video2() returns the same value as the encoded buffer size. Can encoded and decoded data really always be the same size?
- got_frame is always 0, which means I never actually receive a full frame in the result. What could be the reason? Is the video frame created incorrectly, or converted incorrectly from QByteArray to AVFrame?
- How can I convert the decoded AVFrame back to a QByteArray, and can it simply be converted to a QImage?
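On the size question, one reference point: a decoded YUV420P frame (the usual H.264 decoder output format) has a size fixed by the resolution, not by the encoded buffer. A minimal sketch of that arithmetic, assuming YUV420P output (converting the planes to RGB for a QImage is normally done with sws_scale from libswscale):

```cpp
#include <cstddef>

// YUV420P stores one full-resolution luma plane (Y) and two
// quarter-resolution chroma planes (U and V).
std::size_t yuv420pFrameSize(std::size_t width, std::size_t height) {
    std::size_t luma   = width * height;
    std::size_t chroma = (width / 2) * (height / 2); // per chroma plane
    return luma + 2 * chroma;                        // = width * height * 3 / 2
}
```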
Thanks for the suggestion, but I would like to stick with the FFmpeg library. Is it possible to receive a UDP stream using libvlc? – franz

I'm not sure. I think you can, judging by the comments at the link I pasted. You can check it yourself: open the VLC client, go to Media -> Open Network Stream, and paste your link. If the stream starts, then you can also do it with libvlc. –

Yes, that makes sense, since VLC is based on libVLC. Well, thanks a lot for the answers so far. If I don't manage to get the stream working with FFmpeg - I'm still waiting for an answer - then this will be my backup plan. – franz