
I tried to answer this in the original thread, but SO wouldn't let me. Hopefully someone with more authority can merge it into the original question: how to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to a UIImage in iOS.

OK, here is a more complete answer. First, set up the capture:

// Create capture session 
self.captureSession = [[AVCaptureSession alloc] init]; 

[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; 

// Setup capture input 
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice 
                      error:nil]; 
[self.captureSession addInput:captureInput]; 

// Setup video processing (capture output) 
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init]; 
// Don't add frames to the queue if frames are already processing 
captureOutput.alwaysDiscardsLateVideoFrames = YES; 

// Create a serial queue to handle processing of frames 
_videoQueue = dispatch_queue_create("cameraQueue", NULL); 
[captureOutput setSampleBufferDelegate:self queue:_videoQueue]; 

// Set the video output to store frame in YUV 
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]; 
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
[captureOutput setVideoSettings:videoSettings]; 
[self.captureSession addOutput:captureOutput]; 

OK, now the implementation of the delegate/callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
    fromConnection:(AVCaptureConnection *)connection 
{ 

// Create autorelease pool because we are not in the main_queue 
@autoreleasepool { 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    //Lock the imagebuffer 
    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    // Get information about the image 
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 

    // size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 

    CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress; 

    // This just moved the pointer past the offset 
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); 


    // convert the image 
    _prefImageView.image = [self makeUIImage:baseAddress bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow]; 

    // Update the display with the captured image for DEBUG purposes 
    dispatch_async(dispatch_get_main_queue(), ^{ 
     [_myMainView.yUVImage setImage:_prefImageView.image]; 
    });   
} 

And finally, here is the method that converts from YUV to a UIImage:

- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow { 

NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes); 

uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4); 
uint8_t *yBuffer = (uint8_t *)inBaseAddress; 
uint8_t val; 
int bytesPerPixel = 4; 

// for each byte in the input buffer, fill in the output buffer with four bytes 
// the first byte is the Alpha channel, then the next three contain the same 
// value of the input buffer 
for(int y = 0; y < inHeight*inWidth; y++) 
{ 
    val = yBuffer[y]; 
    // Alpha channel 
    rgbBuffer[(y*bytesPerPixel)] = 0xff; 

    // next three bytes same as input 
    rgbBuffer[(y*bytesPerPixel)+1] = rgbBuffer[(y*bytesPerPixel)+2] = rgbBuffer[y*bytesPerPixel+3] = val; 
} 

// Create a device-dependent RGB color space 
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

// NOTE: using yPitch as the width assumes yPitch == inWidth; if the Y plane has 
// row padding at the end of each line this reads past the end of rgbBuffer 
// (see the answer below for a version that handles this) 
CGContextRef context = CGBitmapContextCreate(rgbBuffer, yPitch, inHeight, 8, 
              yPitch*bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast); 

CGImageRef quartzImage = CGBitmapContextCreateImage(context); 

CGContextRelease(context); 
CGColorSpaceRelease(colorSpace); 

UIImage *image = [UIImage imageWithCGImage:quartzImage]; 

CGImageRelease(quartzImage); 
free(rgbBuffer); 
return image; 
} 

You will also need to #import "Endian.h".

Note that the call to CGBitmapContextCreate was much trickier than I expected. I don't know much about video processing, and this call puzzled me for a while. Then, when it finally worked, it was like magic.


I just spent two days trying to write a UIImage into a buffer to add to a video, so I understand your excitement! –


@NicolasManzini Does this solution work for you? I'm getting ':copy_read_only: vm_copy failed: status 1.' Looks related to http://stackoverflow.com/questions/3367849/cgbitmapcontextcreateimage-vm-copy-failed-iphone-sdk –


Maybe check the size of your bitmap context. But I'm doing it the other way round, with CGContextDrawImage(...). –

Answers


Background info: @Michaelg's version only accesses the Y buffer, so you get only luminance and no colour. It also has a buffer-overrun bug if the pitch of the buffer and the number of pixels do not match (padding bytes at the end of each row, for whatever reason). What is going on here is that this is a planar image format, which allocates one byte per pixel for luminance and two bytes per four pixels for the colour information. Rather than being stored contiguously in memory, the data is stored as "planes": the Y (luminance) plane has its own block of memory, and the CbCr (colour) plane has its own block of memory. The CbCr plane has one quarter the number of samples of the Y plane (half the height and half the width), and each pixel in the CbCr plane corresponds to a 2x2 block of the Y plane. Hopefully this background helps.
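
To make that layout concrete, here is a minimal indexing sketch for a single pixel at (x, y); the plane pointers and row pitches are assumed to have been fetched already (yPlane, cbCrPlane, yPitch and cbCrPitch are hypothetical names, not taken from the code below):

// One luminance byte per pixel. 
uint8_t luma = yPlane[y * yPitch + x]; 

// One interleaved Cb/Cr byte pair per 2x2 block of pixels. 
uint8_t cb = cbCrPlane[(y / 2) * cbCrPitch + (x / 2) * 2]; 
uint8_t cr = cbCrPlane[(y / 2) * cbCrPitch + (x / 2) * 2 + 1]; 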

Edit: Both his version and my old version had the potential to overrun the buffer, and would not work if the rows in the image buffer have padding bytes at the end of each row. Furthermore, my CbCr plane buffer was not created with the correct offset. To do this correctly you should always use the Core Video functions such as CVPixelBufferGetWidthOfPlane and CVPixelBufferGetBaseAddressOfPlane. This ensures that you interpret the buffer correctly, and it will work regardless of whether the buffer has a header and whether you get the pointer math wrong. You should take the row sizes and the buffer base addresses from Apple's functions as well. These are documented at: https://developer.apple.com/library/prerelease/ios/documentation/QuartzCore/Reference/CVPixelBufferRef/index.html Note that while the version here makes some use of Apple's functions and some use of the header, it is best to use only Apple's functions. I may update it in the future to not use the header at all.
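
As a rough sketch of what relying only on the Core Video accessors looks like (using the same imageBuffer as in the delegate code below; this is a sketch of the approach, not the code this answer actually uses):

CVPixelBufferLockBaseAddress(imageBuffer, 0); 

size_t width       = CVPixelBufferGetWidthOfPlane(imageBuffer, 0); 
size_t height      = CVPixelBufferGetHeightOfPlane(imageBuffer, 0); 
size_t yPitch      = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0); 
size_t cbCrPitch   = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1); 
uint8_t *yBase     = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); 
uint8_t *cbCrBase  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1); 

// ... convert using yBase/cbCrBase and the per-plane pitches, 
// instead of the CVPlanarPixelBufferInfo_YCbCrBiPlanar header ... 

CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 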

This will convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage, which you can then use.

First, set up the capture:

// Create capture session 
self.captureSession = [[AVCaptureSession alloc] init]; 

[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; 

// Setup capture input 
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice 
                      error:nil]; 
[self.captureSession addInput:captureInput]; 

// Setup video processing (capture output) 
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init]; 
// Don't add frames to the queue if frames are already processing 
captureOutput.alwaysDiscardsLateVideoFrames = YES; 

// Create a serial queue to handle processing of frames 
_videoQueue = dispatch_queue_create("cameraQueue", NULL); 
[captureOutput setSampleBufferDelegate:self queue:_videoQueue]; 

// Set the video output to store frame in YUV 
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]; 
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
[captureOutput setVideoSettings:videoSettings]; 
[self.captureSession addOutput:captureOutput]; 

OK, now the implementation of the delegate/callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
    fromConnection:(AVCaptureConnection *)connection 
{ 

// Create autorelease pool because we are not in the main_queue 
@autoreleasepool { 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    //Lock the imagebuffer 
    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    // Get information about the image 
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 

    // size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 

    CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress; 
    //get the cbrbuffer base address 
    uint8_t* cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1); 
    // This just moved the pointer past the offset 
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); 


    // convert the image 
    UIImage *image = [self makeUIImage:baseAddress cBCrBuffer:cbrBuff bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow]; 

    // Unlock the image buffer now that we are done reading from it 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 

    // Update the display with the captured image for DEBUG purposes 
    // (UIKit must only be touched on the main queue) 
    dispatch_async(dispatch_get_main_queue(), ^{ 
        _prefImageView.image = image; 
        [_myMainView.yUVImage setImage:image]; 
    }); 
  } // end @autoreleasepool 
} 

And finally, here is the method that converts from YUV to a UIImage:
- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress cBCrBuffer:(uint8_t*)cbCrBuffer bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow { 

    NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes); 
    NSUInteger cbCrOffset = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.offset); 
    NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes); 

    uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4); 
    uint8_t *yBuffer = (uint8_t *)inBaseAddress; 
    //uint8_t *cbCrBuffer = inBaseAddress + cbCrOffset; 
    int bytesPerPixel = 4; 

    for(int y = 0; y < inHeight; y++) 
    { 
        uint8_t *rgbBufferLine = &rgbBuffer[y * inWidth * bytesPerPixel]; 
        uint8_t *yBufferLine = &yBuffer[y * yPitch]; 
        // each CbCr row covers two rows of Y samples 
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch]; 

        for(int x = 0; x < inWidth; x++) 
        { 
            // renamed from `y` to avoid shadowing the outer loop variable 
            int16_t luma = yBufferLine[x]; 
            int16_t cb = cbCrBufferLine[x & ~1] - 128; 
            int16_t cr = cbCrBufferLine[x | 1] - 128; 

            uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel]; 

            int16_t r = (int16_t)roundf(luma + cr * 1.4); 
            int16_t g = (int16_t)roundf(luma + cb * -0.343 + cr * -0.711); 
            int16_t b = (int16_t)roundf(luma + cb * 1.765); 

            // ABGR byte order (matches kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast) 
            rgbOutput[0] = 0xff; 
            rgbOutput[1] = clamp(b); 
            rgbOutput[2] = clamp(g); 
            rgbOutput[3] = clamp(r); 
        } 
    } 

    // Create a device-dependent RGB color space 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    NSLog(@"yPitch:%lu inHeight:%zu bytesPerPixel:%d", (unsigned long)yPitch, inHeight, bytesPerPixel); 
    NSLog(@"cbCrPitch:%lu", (unsigned long)cbCrPitch); 
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, inWidth, inHeight, 8, 
                                                 inWidth*bytesPerPixel, colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast); 

    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 

    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 

    UIImage *image = [UIImage imageWithCGImage:quartzImage]; 

    CGImageRelease(quartzImage); 
    free(rgbBuffer); 
    return image; 
} 

You will also need to #import "Endian.h" and to define #define clamp(a) (a>255?255:(a<0?0:a)) (note that the macro should not end with a semicolon, or it will break when used inside larger expressions).
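
For clarity, a minimal sketch of those two lines as they would appear near the top of the implementation file (the parenthesized macro arguments are a small defensive addition, not part of the original answer):

#import "Endian.h"   // for EndianU32_BtoN 

// clamp an int16_t intermediate to the 0-255 range of a byte 
#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a))) 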

Note that the call to CGBitmapContextCreate was much trickier than I expected. I don't know much about video processing, yet this call puzzled me for a while. Then, when it finally worked, it was like magic.
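
For reference, here is the same CGBitmapContextCreate call as above with each argument annotated; this is only a commented restatement of what the method already does, not new behaviour:

CGContextRef context = 
    CGBitmapContextCreate(rgbBuffer,                  // the ABGR pixels filled in by the loop 
                          inWidth,                    // width in pixels 
                          inHeight,                   // height in pixels 
                          8,                          // bits per component 
                          inWidth * bytesPerPixel,    // bytes per row of rgbBuffer 
                          colorSpace,                 // device RGB colour space 
                          kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast); 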


This code does not work if you change the 'videoOrientation' on the AVCaptureConnection. Check [this answer](http://stackoverflow.com/a/31553521/16) for more info. –
