I've seen this question asked a few times, but none of the answers seem to actually work.

AVFoundation - Reverse an AVAsset and output video file

The requirement is to reverse a video and output it to a file (not just play it back in reverse), keeping the same compression, format, and frame rate as the source video.

Ideally, the solution would do everything in memory or in a buffer, and avoid rendering the frames out as image files (e.g. using AVAssetImageGenerator) and then recompiling them; that approach is resource-intensive, gives unreliable timing, and degrades frame/image quality.
-
My contribution:

This still doesn't work, but it's the best I've tried so far:

- Read the sample buffers into an array of CMSampleBufferRef using AVAssetReader.
- Write them back out in reverse order using AVAssetWriter.
- Problem: the timing information for each frame is stored inside the CMSampleBufferRef, so simply appending them in reverse order doesn't work.
- Next, I tried swapping each frame's timing information with that of its reversed/mirror frame.
- Problem: this causes AVAssetWriter to fail with an unknown error.

Next step: I'm going to look into AVAssetWriterInputPixelBufferAdaptor.
```objectivec
- (AVAsset *)assetByReversingAsset:(AVAsset *)asset {
    // Use a file URL, not URLWithString:, for a filesystem path.
    NSURL *tmpFileURL = [NSURL fileURLWithPath:@"/tmp/test.mp4"];
    NSError *error;

    // Initialize the AVAssetReader that will read the input asset track.
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                   outputSettings:nil];
    [reader addOutput:readerOutput];
    [reader startReading];

    // Read the samples into an array.
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    while (1) {
        CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
        if (sample == NULL) {
            break;
        }
        [samples addObject:(__bridge id)sample];
        CFRelease(sample);
    }

    // Initialize the writer that will save to our temporary file.
    CMFormatDescriptionRef formatDescription =
        CFBridgingRetain([videoTrack.formatDescriptions lastObject]);
    AVAssetWriterInput *writerInput =
        [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil
                                     sourceFormatHint:formatDescription];
    CFRelease(formatDescription);
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:tmpFileURL
                                                      fileType:AVFileTypeMPEG4
                                                         error:&error];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    // startWriting must be called before startSessionAtSourceTime:.
    [writer startWriting];
    [writer startSessionAtSourceTime:
        CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

    // Traverse the sample frames in reverse order.
    for (NSInteger i = samples.count - 1; i >= 0; i--) {
        CMSampleBufferRef sample = (__bridge CMSampleBufferRef)samples[i];

        // Since the timing information is built into the CMSampleBufferRef,
        // we make a copy of it with new timing info, copied from the mirror
        // frame at samples[samples.count - i - 1].
        CMSampleBufferRef mirror =
            (__bridge CMSampleBufferRef)samples[samples.count - i - 1];
        CMItemCount numSampleTimingEntries;
        CMSampleBufferGetSampleTimingInfoArray(mirror, 0, nil, &numSampleTimingEntries);
        CMSampleTimingInfo *timingInfo =
            malloc(sizeof(CMSampleTimingInfo) * numSampleTimingEntries);
        CMSampleBufferGetSampleTimingInfoArray(mirror, numSampleTimingEntries,
                                               timingInfo, &numSampleTimingEntries);

        CMSampleBufferRef sampleWithCorrectTiming;
        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                              sample,
                                              numSampleTimingEntries,
                                              timingInfo,
                                              &sampleWithCorrectTiming);
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:sampleWithCorrectTiming];
        }
        CFRelease(sampleWithCorrectTiming);
        free(timingInfo);
    }

    [writer finishWriting];
    return [AVAsset assetWithURL:tmpFileURL];
}
```
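For reference, the AVAssetWriterInputPixelBufferAdaptor route mentioned above would look roughly like this: configure the reader output to actually decode (by passing pixel-buffer output settings instead of nil), give the writer input real encoding settings instead of nil plus a sourceFormatHint, then append each frame's decoded CVPixelBufferRef in reverse order while assigning it the presentation time of its mirror frame so timestamps stay monotonically increasing. A minimal sketch, not a tested implementation; error handling and cleanup are omitted:

```objectivec
// Assumes `samples`, `writer`, and `writerInput` are set up as in the method
// above, except that:
//   - readerOutput was created with decoding settings, e.g.
//       @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }
//   - writerInput was created with real outputSettings (e.g. AVVideoCodecH264)
//     so the adaptor re-encodes the raw frames.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:nil];

for (NSInteger i = samples.count - 1; i >= 0; i--) {
    CMSampleBufferRef sample = (__bridge CMSampleBufferRef)samples[i];
    // Decoded image data for this frame (valid because the reader decoded it).
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);

    // Present frame i at the timestamp of its mirror frame, so presentation
    // times still increase even though frames are appended in reverse.
    CMTime newPTS = CMSampleBufferGetPresentationTimeStamp(
        (__bridge CMSampleBufferRef)samples[samples.count - i - 1]);

    // Crude back-pressure; a real implementation would use
    // requestMediaDataWhenReadyOnQueue:usingBlock: instead.
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.01];
    }
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:newPTS];
}
```

Note that this sketch gives up the original requirement of preserving the source compression exactly, since the frames are decoded and re-encoded.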
I don't think this is possible because of the way video compression works... From my understanding, you can only go forward from a keyframe, not backwards... not to mention all the frames between keyframes. – Bastian

@Bastian Can you elaborate on what you mean? I have the raw sample data (CMSampleBufferRef) for every frame stored in an array. –
FYI for anyone reading this: I figured it out and will post an answer in the next few days. –