2011-04-12 48 views

AVFoundation + AssetWriter: generate a movie with images and audio

I have to export a movie from my iPhone application that contains UIImages from an NSArray, and add some audio files in .caf format that must start at pre-specified times. I have been able to use AVAssetWriter (after going through many questions and answers on this and other sites) to export the video part containing the images, but I can't seem to find a way to add the audio files to complete the movie.

Here is what I have so far:

-(void) writeImagesToMovieAtPath:(NSString *) path withSize:(CGSize) size 
{ 
    NSLog(@"Write Started"); 

    NSError *error = nil; 

    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL: 
           [NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie 
                  error:&error];  
    NSParameterAssert(videoWriter); 

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: 
           AVVideoCodecH264, AVVideoCodecKey, 
           [NSNumber numberWithInt:size.width], AVVideoWidthKey, 
           [NSNumber numberWithInt:size.height], AVVideoHeightKey, 
           nil]; 

    AVAssetWriterInput* videoWriterInput = [[AVAssetWriterInput 
            assetWriterInputWithMediaType:AVMediaTypeVideo 
            outputSettings:videoSettings] retain]; 


    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor 
              assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput 
               sourcePixelBufferAttributes:nil]; 

    NSParameterAssert(videoWriterInput); 
    NSParameterAssert([videoWriter canAddInput:videoWriterInput]); 
    videoWriterInput.expectsMediaDataInRealTime = YES; 
    [videoWriter addInput:videoWriterInput]; 

    //Start a session: 
    [videoWriter startWriting]; 
    [videoWriter startSessionAtSourceTime:kCMTimeZero]; 

    CVPixelBufferRef buffer = NULL; 

    //convert uiimage to CGImage. 

    int frameCount = 0; 

    for(UIImage * img in imageArray) 
    { 
      buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:size]; 

      BOOL append_ok = NO; 
      int j = 0; 
      while (!append_ok && j < 30) 
      { 
       if (adaptor.assetWriterInput.readyForMoreMediaData) 
       { 
        printf("appending %d attempt %d\n", frameCount, j); 

        CMTime frameTime = CMTimeMake(frameCount,(int32_t) kRecordingFPS); 
        append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime]; 

        [NSThread sleepForTimeInterval:0.05]; 
       } 
       else 
       { 
        printf("adaptor not ready %d, %d\n", frameCount, j); 
        [NSThread sleepForTimeInterval:0.1]; 
       } 
       j++; 
      } 
      //release the buffer only after the retry loop, so a failed 
      //append can be retried with a still-valid buffer 
      if (buffer) 
       CVBufferRelease(buffer); 
      if (!append_ok) { 
       printf("error appending image %d times %d\n", frameCount, j); 
      } 
      frameCount++; 
     } 

    //Finish the session: 
    [videoWriterInput markAsFinished]; 
    [videoWriter finishWriting]; 
    NSLog(@"Write Ended"); 
} 
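For context, the method above relies on an `imageArray` ivar and a `kRecordingFPS` constant defined elsewhere in the class. A minimal sketch of how they might be declared and how the method could be invoked (path and size are hypothetical, not from the original post):

```objc
// Assumed to be defined elsewhere in the original project:
#define kRecordingFPS 30          // frames per second used for presentation times
// NSArray *imageArray;           // ivar holding the UIImages to encode

NSString *moviePath = [NSTemporaryDirectory()
                          stringByAppendingPathComponent:@"movie.mov"];
[self writeImagesToMovieAtPath:moviePath withSize:CGSizeMake(640, 480)];
```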

And here is the code for pixelBufferFromCGImage:

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size 
{ 
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: 
         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, 
         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, 
         nil]; 
    CVPixelBufferRef pxbuffer = NULL; 

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, 
             size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, 
             &pxbuffer); 
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL); 

    CVPixelBufferLockBaseAddress(pxbuffer, 0); 
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 
    NSParameterAssert(pxdata != NULL); 

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, 
              size.height, 8, 4*size.width, rgbColorSpace, 
              kCGImageAlphaNoneSkipFirst); 
    NSParameterAssert(context); 
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0)); 
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
              CGImageGetHeight(image)), image); 
    CGColorSpaceRelease(rgbColorSpace); 
    CGContextRelease(context); 

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 

    return pxbuffer; 
} 

So, could you help me with how to add the audio files: how to create buffers for them, and how to set up the adaptor, input settings, etc.?

If this approach might cause problems, please guide me on how to use an AVMutableComposition instead to produce the video output from the image array.


Ok, I have since been able to add the audio files using AVAssetReaders and AVAssetWriterInputs, but when I add the audio files they start one after another without any pause (one finishes and the next starts) instead of starting at the pre-specified times. So how do I tell the AVAssetWriter to accept input at a certain time? Because as far as I understand, [startSessionAtSourceTime] is meant to determine the time in the source, not the time in the destination movie. Any hints? – MuTaTeD 2011-04-14 13:29:37


You are awesome for posting such a detailed solution for others. – TigerCoding 2011-11-30 13:43:23


Does this also work for 1080*1920 images? I have implemented the same code and it works fine at 720*1280 (720/16), but it cannot handle video widths that result in floating-point values (video width/16). Any suggestions? – 2015-12-04 16:33:48

Answers


I ended up exporting the video separately using the code above, and adding the audio files separately using an AVMutableComposition & AVAssetExportSession. Here is the code:

-(void) addAudioToFileAtPath:(NSString *) filePath toPath:(NSString *)outFilePath 
{ 
    NSError * error = nil; 

    AVMutableComposition * composition = [AVMutableComposition composition]; 


    AVURLAsset * videoAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:filePath] options:nil]; 

    AVAssetTrack * videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]; 

    AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo 
                       preferredTrackID: kCMPersistentTrackID_Invalid]; 

    [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,videoAsset.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero 
            error:&error];  

    CMTime audioStartTime = kCMTimeZero; 
    for (NSDictionary * audioInfo in audioInfoArray) 
    { 
     NSString * pathString = [audioInfo objectForKey:audioFilePath]; 
     AVURLAsset * urlAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:pathString] options:nil]; 

     AVAssetTrack * audioAssetTrack = [[urlAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]; 
     AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio 
                        preferredTrackID: kCMPersistentTrackID_Invalid]; 

     [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,urlAsset.duration) ofTrack:audioAssetTrack atTime:audioStartTime error:&error];  

     audioStartTime = CMTimeAdd(audioStartTime, CMTimeMake((int) (([[audioInfo objectForKey:audioDuration] floatValue] * kRecordingFPS) + 0.5), kRecordingFPS)); 
    } 
    AVAssetExportSession* assetExport = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetMediumQuality]; 

    assetExport.outputFileType = AVFileTypeQuickTimeMovie; 
    assetExport.outputURL = [NSURL fileURLWithPath:outFilePath]; 

    [assetExport exportAsynchronouslyWithCompletionHandler: 
    ^(void) { 
     switch (assetExport.status) 
     { 
      case AVAssetExportSessionStatusCompleted: 
//    export complete 
       NSLog(@"Export Complete"); 
       break; 
      case AVAssetExportSessionStatusFailed: 
       NSLog(@"Export Failed"); 
       NSLog(@"ExportSessionError: %@", [assetExport.error localizedDescription]); 
//    export error (see exportSession.error) 
       break; 
      case AVAssetExportSessionStatusCancelled: 
       NSLog(@"Export Cancelled"); 
       NSLog(@"ExportSessionError: %@", [assetExport.error localizedDescription]); 
//    export cancelled 
       break; 
     } 
    }];  
} 
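A hedged usage sketch tying the two steps together (the file names are hypothetical, and `audioInfoArray` is assumed to be populated beforehand as described in the comments):

```objc
NSString *videoPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"video.mov"];
NSString *finalPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"final.mov"];

// Step 1: write the UIImage frames to a silent movie.
[self writeImagesToMovieAtPath:videoPath withSize:CGSizeMake(640, 480)];

// Step 2: lay the audio files from audioInfoArray over it and export
// (the export itself runs asynchronously).
[self addAudioToFileAtPath:videoPath toPath:finalPath];
```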

Can you replace the "for" loop with a single "audioInfo" dictionary that has all the values that need to be set, so that it is easier to copy-paste? :) – 2011-06-12 16:51:19


@Chintan Patel: I had to add a varying number of audio files of different lengths to the generated movie, so I made a dictionary for each audio file to include and added them all to an array (audioInfoArray). The audioInfo dictionary contains the keys **audioFilePath** and **audioDuration**, an _NSString_ & a _float_ respectively. – MuTaTeD 2011-06-12 20:34:47


Thanks, that will do. – 2011-06-12 20:40:53
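For reference, the `audioInfoArray` described in the comment above might be built like this (the key constants match the `objectForKey:` lookups in the answer; the paths and durations are purely illustrative):

```objc
// Key constants assumed to match the answer's dictionary lookups.
NSString * const audioFilePath = @"audioFilePath";
NSString * const audioDuration = @"audioDuration";

NSArray *audioInfoArray = [NSArray arrayWithObjects:
    [NSDictionary dictionaryWithObjectsAndKeys:
        @"/path/to/first.caf", audioFilePath,
        [NSNumber numberWithFloat:2.5f], audioDuration, nil],
    [NSDictionary dictionaryWithObjectsAndKeys:
        @"/path/to/second.caf", audioFilePath,
        [NSNumber numberWithFloat:4.0f], audioDuration, nil],
    nil];
```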


In answer to the question "Can you replace the 'for' loop with a single 'audioInfo' dictionary that has all the values that need to be set, so that it becomes easier to copy and paste? :)"

If you only want to add a single audio file, the following code should replace the for loop:

NSString * pathString = [self getAudioFilePath]; 
AVURLAsset * urlAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:pathString] options:nil]; 

AVAssetTrack * audioAssetTrack = [[urlAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]; 
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio 
                preferredTrackID: kCMPersistentTrackID_Invalid]; 

[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,urlAsset.duration) ofTrack:audioAssetTrack atTime:kCMTimeZero error:&error];  