
Convert UIImage to CMSampleBufferRef

I'm recording video using AVFoundation, and I need to crop the video to 320x280. I get a CMSampleBufferRef and convert it to a UIImage using the code below:

CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)]; // resizedImageWithSize: is a custom UIImage category method

CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */

[_videoInput appendSampleBuffer:croppedBuffer];
// _videoInput is an AVAssetWriterInput

The imageFromSampleBuffer: method looks like this:

- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // For a non-planar (e.g. BGRA) buffer, use CVPixelBufferGetBaseAddress;
    // CVPixelBufferGetBaseAddressOfPlane returns NULL for non-planar buffers.
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this! The sample buffer owns it.

    return newImage; // +1 reference; the caller must CGImageRelease it
}

Now that I have the resized image, I need to convert it back to a CMSampleBufferRef so the AVAssetWriterInput can write it.

How do I convert the UIImage back to a CMSampleBufferRef?

Thanks, everyone!

Answers


While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.

You describe the source pixel-buffer format in an inputSettings dictionary and pass that to the adaptor's initializer:

// pixelFormat, image.uncompressedSize, and image.rect come from the answerer's
// own context; for BGRA output, pixelFormat would be kCVPixelFormatType_32BGRA.
NSMutableDictionary *inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString *)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString *)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString *)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];

You can then append CVPixelBuffers to your adaptor:

[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime];

The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImage to a pixel buffer, as described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.
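
In case the link rots, here is a minimal sketch of that kind of conversion, assuming a 32BGRA adaptor and drawing the CGImage into a buffer taken from the adaptor's pool; the function name mirrors the linked answer, but the body is illustrative rather than the exact code from the link:

// Sketch: render a CGImage into a CVPixelBuffer drawn from the adaptor's pool.
// Assumes the adaptor was configured for kCVPixelFormatType_32BGRA.
static CVPixelBufferRef newPixelBufferFromCGImage(CGImageRef image, AVAssetWriterInputPixelBufferAdaptor *adaptor)
{
    // The adaptor's pool only exists once the asset writer has started writing.
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &pixelBuffer);
    if (status != kCVReturnSuccess)
    {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer), CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer), 8, CVPixelBufferGetBytesPerRow(pixelBuffer), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Fill the entire buffer with the image.
    CGContextDrawImage(context, CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer)), image);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer; // +1 reference; release with CVPixelBufferRelease after appending
}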


Unfortunately, the documentation for this AVAssetWriterInputPixelBufferAdaptor is practically nonexistent; 99% of what Apple writes is poorly documented. While this works for frames read from an array, it fails for a live video stream. – SpaceDog 2016-12-18 05:43:48


This is a function I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled results within a CMSampleBufferRef that you provide:

// Frees the frame memory once the pixel buffer no longer needs it.
static void stillImageDataReleaseCallback(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
    // CVPixelBufferCreateWithPlanarBytes for YUV input

    CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *sourceImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);

    // Draw the full-size image into a bitmap context of the target size.
    GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);

    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
    CGImageRelease(cgImageFromBytes);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    CGDataProviderRelease(dataProvider);
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0); // balance the lock above

    // Wrap the scaled bytes in a pixel buffer; the release callback frees imageData.
    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

    CMTime frameTime = CMTimeMake(1, 30);
    CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixel_buffer);
}

It doesn't require you to go all the way to creating a CMSampleBufferRef, but as weichsel points out, all you actually need for encoding the video is the CVPixelBufferRef.

However, if what you really want to do here is crop video and record it, going to and from a UIImage will be a very slow way to do it. Instead, may I recommend using something like GPUImage to capture video with a GPUImageVideoCamera input (or GPUImageMovie, if cropping a previously recorded movie), feeding that into a GPUImageCropFilter, and taking the result to a GPUImageMovieWriter. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than the approach described above; a sketch of the pipeline follows.
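
A rough sketch of that pipeline, assuming GPUImage's Objective-C API; the session preset, crop region (in normalized 0.0-1.0 coordinates), and output URL are placeholders, not values from this answer:

// Capture -> crop -> write, all on the GPU.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Crop region is specified in normalized coordinates relative to the input frame.
GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 0.875)];

NSURL *movieURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"cropped.m4v"]];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

[videoCamera addTarget:cropFilter];
[cropFilter addTarget:movieWriter];

videoCamera.audioEncodingTarget = movieWriter;
[videoCamera startCameraCapture];
[movieWriter startRecording];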


OK, but then how can we add text or draw lines using a CVPixelBuffer? – user924 2018-03-01 13:25:18

// Creates a 32ARGB CVPixelBuffer and draws the UIImage's CGImage into it.
- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {

    CGSize size = img.size;
    CGImageRef image = [img CGImage];

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
          [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's actual bytes-per-row; it may be padded beyond 4 * width.
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    // Draw into the full buffer (in points, so Retina images are scaled to fit).
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer; // +1 reference; the caller must CVPixelBufferRelease it
}
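
For context (since the answer itself gives no explanation): the method renders the UIImage into a freshly created 32ARGB CVPixelBuffer, which you could then hand to the adaptor from the first answer. An illustrative usage sketch, where pixelBufferAdapter and sampleBuffer are assumed from earlier in this thread, not part of this answer:

// Convert the resized image and append it with the source frame's timestamp.
CVPixelBufferRef pixelBuffer = [self CVPixelBufferRefFromUiImage:_uiImage];
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
CVPixelBufferRelease(pixelBuffer);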

Please add some explanation to your answer. – Mostafiz 2016-07-21 06:50:08