H264 video streaming over RTMP on iOS

With a bit of digging, I found a library that extracts NAL units from an .mp4 file while it is being written. I'm attempting to packetize this data into FLV over RTMP using libavformat and libavcodec. I set up the video stream with:
-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);
    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }

    videoStream = avformat_new_stream(oc, videoCodec);
    videoCodecContext = videoStream->codec;
    videoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    videoCodecContext->codec_id = STREAM_VIDEO_CODEC;
    videoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    videoCodecContext->profile = FF_PROFILE_H264_BASELINE;
    videoCodecContext->bit_rate = 512000;
    videoCodecContext->bit_rate_tolerance = 0;
    videoCodecContext->width = STREAM_WIDTH;
    videoCodecContext->height = STREAM_HEIGHT;
    videoCodecContext->time_base.den = STREAM_TIME_BASE;
    videoCodecContext->time_base.num = 1;
    videoCodecContext->gop_size = STREAM_GOP;
    videoCodecContext->has_b_frames = 0;
    videoCodecContext->ticks_per_frame = 2;
    videoCodecContext->qcompress = 0.6;
    videoCodecContext->qmax = 51;
    videoCodecContext->qmin = 10;
    videoCodecContext->max_qdiff = 4;
    videoCodecContext->i_quant_factor = 0.71;
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

    videoCodecContext->extradata = avcCHeader;
    videoCodecContext->extradata_size = avcCHeaderSize;

    ret = avcodec_open2(videoStream->codec, videoCodec, NULL);
    if (ret < 0)
        NSLog(@"Could not open codec!");
}
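One thing I'm unsure about is the extradata assignment: according to the libavcodec headers, extradata is supposed to be allocated with av_malloc() and padded with FF_INPUT_BUFFER_PADDING_SIZE zero bytes, because libavcodec frees it itself when the context is closed, so pointing it at my own avcCHeader buffer may already be a source of heap trouble. A rough sketch of what I believe the safer version looks like:

// Give libavcodec its own padded copy of the avcC header; the library
// will av_free() this when the codec context is closed.
videoCodecContext->extradata = av_mallocz(avcCHeaderSize + FF_INPUT_BUFFER_PADDING_SIZE);
memcpy(videoCodecContext->extradata, avcCHeader, avcCHeaderSize);
videoCodecContext->extradata_size = avcCHeaderSize;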
Then I connect, and every time the library extracts a NALU it hands my RTMPClient an array holding one or two NALUs. The method that handles the actual streaming looks like this:
-(void)writeNALUToStream:(NSArray*)data time:(double)pts {
    int ret = 0;
    uint8_t *buffer = NULL;
    int bufferSize = 0;
    // Number of NALUs within the data array
    int numNALUs = [data count];
    // First NALU
    NSData *fNALU = [data objectAtIndex:0];
    int fLen = [fNALU length];
    // If there is more than one NALU...
    if (numNALUs > 1) {
        // Second NALU
        NSData *sNALU = [data objectAtIndex:1];
        int sLen = [sNALU length];
        // Allocate a buffer the size of first data and second data
        buffer = av_malloc(fLen + sLen);
        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);
        // Copy the second data bytes of sLen into the buffer + fLen + 1
        memcpy(buffer + fLen + 1, [sNALU bytes], sLen);
        // Update the size of the buffer
        bufferSize = fLen + sLen;
    } else {
        // Allocate a buffer the size of first data
        buffer = av_malloc(fLen);
        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);
        // Update the size of the buffer
        bufferSize = fLen;
    }
    // Initialize the packet
    av_init_packet(&pkt);
    //av_packet_from_data(&pkt, buffer, bufferSize);
    // Set the packet data to the buffer
    pkt.data = buffer;
    pkt.size = bufferSize;
    pkt.pts = pts;
    // Stream index 0 is the video stream
    pkt.stream_index = 0;
    // Add a key frame flag every 15 frames
    if ((processedFrames % 15) == 0)
        pkt.flags |= AV_PKT_FLAG_KEY;
    // Write the frame to the stream
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0)
        NSLog(@"Error writing frame %i to stream", processedFrames);
    else {
        // Update the number of frames successfully streamed
        frameCount++;
        // Update the number of bytes successfully sent
        bytesSent += pkt.size;
    }
    // Update the number of frames processed
    processedFrames++;
    // Update the number of bytes processed
    processedBytes += pkt.size;
    free((uint8_t*)buffer);
    // Free the packet
    av_free_packet(&pkt);
}
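Looking at that method again, two things stand out to me: the second memcpy writes at buffer + fLen + 1 even though only fLen + sLen bytes were allocated, which overruns the allocation by one byte, and the buffer comes from av_malloc() but is released with plain free(). Here is a small sketch of how I think the concatenation should look (copyNALUs is just an illustrative name, not something from my project):

#import <Foundation/Foundation.h>
#include <libavcodec/avcodec.h>   // FF_INPUT_BUFFER_PADDING_SIZE
#include <libavutil/mem.h>        // av_mallocz / av_free
#include <string.h>

// Illustrative helper (not part of the original code): concatenate one or
// two NALUs into a single av_malloc'ed buffer. The caller must release the
// result with av_free(), not free().
static uint8_t *copyNALUs(NSData *first, NSData *second, int *outSize)
{
    int firstLen  = (int)[first length];
    int secondLen = (second != nil) ? (int)[second length] : 0;
    int total     = firstLen + secondLen;

    // Zeroed padding at the end keeps FFmpeg's bitstream reader from
    // running off the end of the data.
    uint8_t *buf = av_mallocz(total + FF_INPUT_BUFFER_PADDING_SIZE);
    if (buf == NULL)
        return NULL;

    memcpy(buf, [first bytes], firstLen);
    if (second != nil) {
        // The second NALU starts right after the first one:
        // offset firstLen, not firstLen + 1.
        memcpy(buf + firstLen, [second bytes], secondLen);
    }

    *outSize = total;
    return buf;
}

With something like this, the free((uint8_t*)buffer) call at the end of the method would become av_free(buffer).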
After roughly 100 frames I get an error:

malloc: *** error for object 0xe5bfa0: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
which I can't seem to prevent. I've tried commenting out the av_free_packet() and free() calls, and I've tried using av_packet_from_data() instead of initializing the packet and setting the data and size values by hand.
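If I understand the documentation right, av_packet_from_data() only accepts a buffer that was allocated with av_malloc() and is FF_INPUT_BUFFER_PADDING_SIZE bytes larger than the size passed in; on success the packet owns the data, so the separate free() has to go and av_free_packet() releases everything. Roughly:

// Hand the padded, av_malloc'ed buffer over to the packet; on success
// the packet owns it and av_free_packet() will release it.
av_init_packet(&pkt);
if (av_packet_from_data(&pkt, buffer, bufferSize) < 0) {
    av_free(buffer);   // ownership was not transferred
    return;
}
pkt.pts = pts;
pkt.stream_index = 0;  // video stream
ret = av_interleaved_write_frame(oc, &pkt);
av_free_packet(&pkt);  // also frees the buffer, so no extra free()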
My question is: how can I stop this error from occurring? Also, according to Wireshark these look like well-formed RTMP H264 packets, yet they won't play back as anything more than a black screen. Am I overlooking some obvious mistake?
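Regarding the black screen: as far as I understand the FLV packaging, H.264 is carried in AVCC form, i.e. the decoder configuration lives in the avcC extradata and every NALU in a packet is prefixed with its length (typically 4 bytes, big-endian, matching the length size declared in the avcC) instead of an Annex B start code. So if the library hands back raw or start-code-prefixed NALUs, they may need to be re-prefixed before being written. A tiny sketch of that prefixing (writeAVCCNALU is an illustrative name):

#include <libavutil/intreadwrite.h>   // AV_WB32
#include <stdint.h>
#include <string.h>

// Illustrative helper (not from the original code): prefix one raw NALU
// (no start code) with a 4-byte big-endian length, as used by the AVCC
// packaging that FLV expects. Returns the number of bytes written to dst.
static int writeAVCCNALU(uint8_t *dst, const uint8_t *nalu, int naluSize)
{
    AV_WB32(dst, naluSize);           // 4-byte length prefix
    memcpy(dst + 4, nalu, naluSize);  // NALU payload follows the length
    return 4 + naluSize;
}

It would probably also be safer to set AV_PKT_FLAG_KEY only when a packet actually contains an IDR slice (nal_unit_type 5, i.e. (nalu[0] & 0x1F) == 5) rather than on every 15th frame.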
Did you ever find a solution? I'm trying to accomplish the same thing. When I try to play the video I just see a few green frames. Which library are you using to read the NALUs? Thanks. –