
iOS Swift: trying to build a video by writing image buffers; finishWritingWithCompletionHandler is never called, and the output video contains zero bytes

I'm trying to build a video from just two frames rendered from a static image. I've been going back and forth on the timing parameters. The last step, finishWritingWithCompletionHandler, never seems to be called ("finished writing..." is never printed); only a zero-byte .mp4 file is created, and no error occurs. I can't figure out why. Here is the code I'm using:

func createBackgroundVideo(CompletionHandler: (path: String)->Void) { 

    var maybeError: NSError? 
    let fileMgr = NSFileManager.defaultManager() 
    let docDirectory = NSHomeDirectory().stringByAppendingPathComponent("Documents") 
    let videoOutputPath = docDirectory.stringByAppendingPathComponent(BgVideoName) 

    if (!fileMgr.removeItemAtPath(videoOutputPath, error: &maybeError)) { 
     NSLog("Umable to delete file: %@", maybeError!.localizedDescription) 
    } 

    println(videoOutputPath) 

    let videoWriter = AVAssetWriter(
     URL: NSURL(fileURLWithPath: videoOutputPath), 
     fileType: AVFileTypeQuickTimeMovie, 
     error: &maybeError 
    ) 

    var videoSettings = [ 
     AVVideoCodecKey: AVVideoCodecH264, 
     AVVideoWidthKey: NSNumber(float: Float(videoWidth)), 
     AVVideoHeightKey: NSNumber(float: Float(videoHeight)) 
    ] 

    var avAssetInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings) 
    avAssetInput.expectsMediaDataInRealTime = true 

    var adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: avAssetInput, sourcePixelBufferAttributes: nil) 

    videoWriter.addInput(avAssetInput) 
    videoWriter.startWriting() 
    videoWriter.startSessionAtSourceTime(kCMTimeZero) 

    var frameCount: Int64 = 0; 
    var buffer: CVPixelBufferRef 

    //buffer = PixelBuffer.pixelBufferFromCGImage2(self.bgImage.CGImage, andSize: CGSizeMake(videoWidth, videoHeight)).takeUnretainedValue() 

    for i in 1...2 { 
     buffer = PixelBuffer.pixelBufferFromCGImage2(self.bgImage.CGImage, andSize: CGSizeMake(videoWidth, videoHeight)).takeUnretainedValue() 
     var appendOk = false 
     var retries: Int = 0 

     while (!appendOk && retries < 30) { 
      if (adaptor.assetWriterInput.readyForMoreMediaData) { 
       let frameTime = CMTimeMake(frameCount, 1); 
       appendOk = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime) 
       if (!appendOk) { 
        println("some erorr occurred", videoWriter.error) 
       } else { 
        println("pixel written") 
       } 
      } else { 
       println("adaptor is not ready....") 
       NSThread.sleepForTimeInterval(0.1) 
      } 
      retries++ 
     } 

     if (!appendOk) { 
      println("Error appending image....") 
     } 

     frameCount++ 
    } 

    avAssetInput.markAsFinished() 
    videoWriter.finishWritingWithCompletionHandler({() -> Void in 
     println("finished writing...") 
     CompletionHandler(path: videoOutputPath) 
    }) 
} 

I'm calling the pixel-buffer-from-CGImage method from Obj-C (I've added the header and the bridging header, and that part seems to work fine):

+ (CVPixelBufferRef) pixelBufferFromCGImage2: (CGImageRef) image andSize:(CGSize) size { 

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: 
          [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, 
          [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, 
          nil]; 
    CVPixelBufferRef pxbuffer = NULL; 

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 
              size.width, 
              size.height, 
              kCVPixelFormatType_32ARGB, 
              (__bridge CFDictionaryRef) options, 
              &pxbuffer); 
    if (status != kCVReturnSuccess){ 
     NSLog(@"Failed to create pixel buffer"); 
    } 

    CVPixelBufferLockBaseAddress(pxbuffer, 0); 
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, 
               size.height, 8, 4*size.width, rgbColorSpace, 
               kCGImageAlphaPremultipliedFirst); 

    float offsetY = size.height/2 - CGImageGetHeight(image)/2; 
    float offsetX = size.width/2 - CGImageGetWidth(image)/2; 

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0)); 
    CGContextDrawImage(context, CGRectMake(offsetX, offsetY, CGImageGetWidth(image), 
              CGImageGetHeight(image)), image); 
    CGColorSpaceRelease(rgbColorSpace); 
    CGContextRelease(context); 

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 

    return pxbuffer; 
} 

Thanks for reading.


What is the question? – abhi1992 2016-01-13 07:36:46

Answer


Maybe your videoSettings dictionary is incomplete. Try setting more of the atom information, like this:

var videoCleanApertureSettings = [AVVideoCleanApertureWidthKey:Int(self.width), 
           AVVideoCleanApertureHeightKey:Int(self.height), 
         AVVideoCleanApertureHorizontalOffsetKey:0, 
         AVVideoCleanApertureVerticalOffsetKey:0] 

var videoAspectRatioSettings = [AVVideoPixelAspectRatioHorizontalSpacingKey:1, 
            AVVideoPixelAspectRatioVerticalSpacingKey:1] 

var codecSettings = [AVVideoCleanApertureKey:videoCleanApertureSettings, 
        AVVideoPixelAspectRatioKey:videoAspectRatioSettings] 

var videoSettings = [AVVideoCodecKey:AVVideoCodecH264, 
    AVVideoCompressionPropertiesKey:codecSettings, 
        AVVideoWidthKey:Int(self.width), 
        AVVideoHeightKey:Int(self.height)] 

You start your video at timestamp zero. That's fine:

[self.videoWriter startSessionAtSourceTime:kCMTimeZero]; 

Maybe the timestamps of your frames are not far enough apart for anything to be visible. If you need each image to be displayed for some number of seconds, you can do something like this:

int64_t newFrameNumber = (uint64_t)(presentationTimeInSeconds * 60.); 
CMTime frameTime = CMTimeMake(newFrameNumber, 60); 

Using a timescale of 60 gives you good resolution when working in units of seconds.

To produce a slideshow in "real time", you can encode the timestamps using NSDate:

int64_t newFrameNumber = (uint64_t)(fabs([self.videoStartDate timeIntervalSinceNow]) * 60.); 

where self.videoStartDate is an [NSDate date] value that you set right after starting the video.
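
A minimal Swift sketch of the same idea, assuming you record the start date right after the session begins:

import Foundation 
import CoreMedia 

let videoStartDate = NSDate() // set right after startSessionAtSourceTime 

// for every frame, derive the timestamp from the elapsed wall-clock time 
let elapsedSeconds = fabs(videoStartDate.timeIntervalSinceNow) 
let frameTime = CMTimeMake(Int64(elapsedSeconds * 60.0), 60) 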

CMTime tells the decoder when to present an image, not how long to display it. You start your frameCount at 0, which tells the decoder to present the first image immediately. Maybe try starting at 1 (see the sketch below) and check whether the video then shows the first image a bit later.
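
A minimal Swift sketch of the question's loop with the numbering shifted (starting at 1 is my reading of the suggestion above):

import CoreMedia 

var frameCount: Int64 = 1 // start at 1 instead of 0 
for _ in 1...2 { 
    let frameTime = CMTimeMake(frameCount, 1) // presented at frameCount seconds 
    // adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime) 
    frameCount++ 
} 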

If you are using startSessionAtSourceTime, you must end the video with endSessionAtSourceTime before calling finishWritingWithCompletionHandler, otherwise the closure may not be called. Pass the last timestamp you wrote to endSessionAtSourceTime.
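
Applied to the question's code, the end of createBackgroundVideo would look roughly like this (a sketch; after the loop, frameCount holds the timestamp just past the last frame written, on a timescale of 1):

avAssetInput.markAsFinished() 
videoWriter.endSessionAtSourceTime(CMTimeMake(frameCount, 1)) 
videoWriter.finishWritingWithCompletionHandler({() -> Void in 
    println("finished writing...") 
    CompletionHandler(path: videoOutputPath) 
}) 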

You can also try Apple's deprecated method, to see whether this might be a bug. After marking the input as finished, call

videoWriter.finishWriting() 

instead of finishWritingWithCompletionHandler, and then wait a little while for the file to be closed by the disk writer (i.e., by using a dispatch queue):

int64_t delayInSeconds = 1; 
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC); 
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){ 

    // call your completion handler after the file has been written 
}); 

Here is the Swift version:

let delayInSeconds:Double = 0.5 
let popTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delayInSeconds * Double(NSEC_PER_SEC))) 
dispatch_after(popTime, dispatch_get_main_queue(), { 

    println("finished writing...") 
    CompletionHandler(path: videoOutputPath) 
}) 

Maybe your videoWriter instance no longer exists after your method returns. (The block is called asynchronously, but you declared the videoWriter inside your function, so ARC may release the object before the completion handler is called.) Declare the writer outside the function to get around this, as sketched below.
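
A minimal sketch of that change (the class name is illustrative):

import AVFoundation 

class VideoBuilder { 
    // strong reference held by the class, so the writer outlives the method call 
    var videoWriter: AVAssetWriter! 

    func createBackgroundVideo(CompletionHandler: (path: String) -> Void) { 
        // configure and use self.videoWriter here instead of a local variable, 
        // so it is still alive when finishWritingWithCompletionHandler fires 
    } 
} 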

Tip:

Keep your CGColorSpace in memory (i.e., make it a class var, or a static var here), because CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); takes a long time to initialize. Doing this only once, before encoding the video, will noticeably speed up your app!
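
A minimal Swift sketch of the same idea (in Objective-C, a function-local static variable achieves the same thing):

import CoreGraphics 

// created once and reused for every frame, instead of once per pixel buffer 
let cachedRGBColorSpace = CGColorSpaceCreateDeviceRGB() 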