Cannot append CMSampleBuffer to AVAssetWriterInput (error -12780)

I am manually decoding an H.264 RTSP stream with ffmpeg and trying to save the uncompressed frames using an AVAssetWriterInput. When I call AVAssetWriterInput's appendSampleBuffer:, I get the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x170059530 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed}

The CMSampleBuffer contains a BGRA frame and looks like this:
CMSampleBuffer 0x159d12900 retainCount: 1 allocator: 0x1b3aa3bb8
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMVideoFormatDescription 0x17405bd50 [0x1b3aa3bb8]> {
mediaType:'vide'
mediaSubType:'BGRA'
mediaSpecific: {
codecType: 'BGRA'
dimensions: 720 x 1280
}
extensions: {<CFBasicHash 0x1742652c0 [0x1b3aa3bb8]>{type = immutable dict, count = 4,
entries =>
0 : <CFString 0x1addb17c8 [0x1b3aa3bb8]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1addb1808 [0x1b3aa3bb8]>{contents = "ITU_R_601_4"}
1 : <CFString 0x1addb1928 [0x1b3aa3bb8]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1addb17e8 [0x1b3aa3bb8]>{contents = "ITU_R_709_2"}
2 : <CFString 0x1adde3800 [0x1b3aa3bb8]>{contents = "CVBytesPerRow"} = <CFNumber 0xb00000000000b402 [0x1b3aa3bb8]>{value = +2880, type = kCFNumberSInt32Type}
3 : <CFString 0x1adde3880 [0x1b3aa3bb8]>{contents = "Version"} = <CFNumber 0xb000000000000022 [0x1b3aa3bb8]>{value = +2, type = kCFNumberSInt32Type}
}
}
}
sbufToTrackReadiness = 0x0
numSamples = 1
sampleTimingArray[1] = {
{PTS = {3000/90000 = 0.033}, DTS = {INVALID}, duration = {INVALID}},
}
imageBuffer = 0x17413ebe0
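One thing that stands out in the dump above is that the sample timing has duration = {INVALID} (and DTS = {INVALID}). A sketch of wrapping a decoded CVPixelBuffer in a CMSampleBuffer with fully specified timing is below; `pixelBuffer`, `frameIndex`, and the 90 kHz timescale with a 3000-tick (30 fps) frame duration are assumptions for illustration, not code from the question.

```swift
import CoreMedia
import CoreVideo

// Sketch: wrap a decoded CVPixelBuffer in a CMSampleBuffer with an
// explicit duration and PTS, rather than leaving them invalid.
func makeSampleBuffer(from pixelBuffer: CVPixelBuffer,
                      frameIndex: Int64) -> CMSampleBuffer? {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return nil }

    // Assumed 30 fps in a 90 kHz timescale, matching the PTS seen in
    // the dump (3000/90000). Adjust to the real stream timing.
    var timing = CMSampleTimingInfo(
        duration: CMTime(value: 3000, timescale: 90000),
        presentationTimeStamp: CMTime(value: frameIndex * 3000, timescale: 90000),
        decodeTimeStamp: .invalid) // DTS can stay invalid for uncompressed frames

    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: format,
        sampleTiming: &timing,
        sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}
```

Buffers produced by the camera capture pipeline carry complete timing, which may be why those append without error while the hand-built ones do not.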
I have also looked at the following question and answer, but it does not seem to explain the problem I am running into (the format I am using is a supported pixel format): Why won't AVFoundation accept my planar pixel buffers on an iOS device?
Any help would be greatly appreciated!
FYI - when I save BGRA CMSampleBuffers obtained from the iPhone camera, it just works. If needed, I can paste a sample CMSampleBuffer from that path as well.
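Since the frames are uncompressed pixel buffers to begin with, another route worth trying is to hand the CVPixelBuffers to the writer through an AVAssetWriterInputPixelBufferAdaptor instead of building CMSampleBuffers at all. A minimal sketch, assuming a configured `AVAssetWriter` and a decoded `pixelBuffer` (the settings values here are illustrative, not taken from the question):

```swift
import AVFoundation
import CoreVideo

// Sketch: append decoded BGRA frames via a pixel-buffer adaptor, which
// takes an explicit presentation time per frame and sidesteps
// sample-buffer timing entirely.
let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264, // assumed output codec
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280
]
let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: input,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: 720,
        kCVPixelBufferHeightKey as String: 1280
    ])

// Per decoded frame: append the raw CVPixelBuffer with its PTS.
func append(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
    if input.isReadyForMoreMediaData {
        adaptor.append(pixelBuffer, withPresentationTime: time)
    }
}
```

The adaptor also manages a pixel buffer pool, which tends to be more efficient than allocating standalone buffers per frame.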
How are you setting the presentation timestamps? –