2012-04-27

Create a 16:9 video instead of 4:3 - AVAsset Writer - iPhone

I am using the following code to make a video from a still 16:9 image using AVAssetWriter. The problem is that, for some reason, the video it produces comes out in 4:3 format.

Can anyone suggest a way to modify the code so that it generates a 16:9 video, or alternatively, how I can convert the 4:3 video to 16:9?

Thanks

- (void)createVideoFromStillImage
{
    //Set the size according to the device type (iPhone or iPad).
    CGSize size = CGSizeMake(screenWidth, screenHeight);

    NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/IntroVideo.mov"];

    NSError *error = nil;

    unlink([betaCompressionDirectory UTF8String]);

    //----initialize compression engine
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error)
        NSLog(@"error = %@", [error localizedDescription]);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.height], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.width], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                    sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"I can't add this input");

    [videoWriter addInput:writerInput];

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    //CGImageRef theImage = [finishedMergedImage CGImage];
    CGImageRef theImage = [introImage CGImage];

    //dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    int __block frame = 0;

    //Calculate how much progress % one frame completion represents. Maximum of 75%.
    float currentProgress = 0.0;
    float progress = (80.0/kDurationOfIntroOutro);
    //NSLog(@"Progress is %f", progress);

    for (int i = 0; i <= kDurationOfIntroOutro; i++) {

        //Update our progress view for every frame that is generated.
        [self updateProgressView:currentProgress];
        currentProgress += progress;

        //NSLog(@"CurrentProgress is %f", currentProgress);

        frame++;
        [NSThread sleepForTimeInterval:0.05]; //Delay to allow buffer to be ready.
        CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:theImage size:size];
        if (buffer) {
            if (adaptor.assetWriterInput.readyForMoreMediaData) {
                if (![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
                    NSLog(@"FAIL");
                else
                    NSLog(@"Success:%d", frame);
                CFRelease(buffer);
            }
        }
    }

    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    //NSLog(@"outside for loop");
    //Grab the URL for the video so we can use it later.
    NSURL *url = [self applicationDocumentsDirectory:kIntroVideoFileName];
    [assetURLArray setObject:url forKey:kIntroVideo];
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
    //CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
+0

Should your `videoSettings` be using the screen width and height? Don't you want it to use your video width and height instead? For iOS devices, the screen width and height are roughly a 4:3 aspect ratio. – 2012-04-27 21:33:53

+0

Brad - thanks - so should I specify something like 1280x720? – GuybrushThreepwood 2012-04-27 21:40:34

+1

I would use whatever output video size you want. At least, that's what my code does. – 2012-04-27 21:43:55

Answer

1

So that this can be closed out, I'll restate what I said above. The `videoSettings` dictionary you use should specify the target dimensions of your video, but you are passing in the dimensions of your view. Unless you want to record video at that size, you need to change the values passed in for AVVideoWidthKey and AVVideoHeightKey to the correct output size.

Given that iOS device screens have aspect ratios close to 4:3, this is what led to that ratio in the recorded video.
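As an illustration, a corrected settings dictionary might look like the sketch below. The 1280x720 dimensions are an assumption based on the comment thread above (any 16:9 pair would do), not values from the original code:

```objc
//Assumed target size: 1280x720 is one common 16:9 choice, used here for illustration.
CGSize videoSize = CGSizeMake(1280.0, 720.0);

//Pass the desired output dimensions, not the screen/view dimensions.
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:videoSize.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:videoSize.height], AVVideoHeightKey, nil];
```

Note that the same `videoSize` would then also need to be used when creating the pixel buffers, so the drawn frames match the writer's output dimensions.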

+0

I was sure I had read that somewhere before – GuybrushThreepwood 2012-04-28 17:45:21
