23

I am having some problems getting a UIImage from a CVPixelBuffer. How do I convert a CVPixelBuffer to a UIImage? This is what I am trying:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer); 
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate); 
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments]; 
if (attachments) 
    CFRelease(attachments); 
size_t width = CVPixelBufferGetWidth(pixelBuffer); 
size_t height = CVPixelBufferGetHeight(pixelBuffer); 
if (width && height) { // test to make sure we have valid dimensions 
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage]; 

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame]; 
    lv.contentMode = UIViewContentModeScaleAspectFill; 
    self.lockedView = lv; 
    [lv release]; 
    self.lockedView.image = image; 
    [image release]; 
} 
[ciImage release]; 

height and width are both correctly set to the camera's resolution. image is created, but it seems to be black (or maybe transparent?). I can't quite figure out where the problem is. Any ideas would be appreciated.

+0

Do you definitely need the CIImage in between, e.g. because you're going to apply some intermediate CIFilters, or could you just go CGBitmapContextCreate -> UIImage? – Tommy

+0

For now, I just want to display it in a view and see what I'm dealing with. Down the road, I'd like to play with the pixels. – mahboudz

Answers

36

To start with the obvious stuff that doesn't directly relate to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either camera into an independent view, if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is connected directly to the AVCaptureSession and updates itself (a sketch follows below).

On the central question I have to admit to a lack of confidence. There is a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it, and it doesn't inherently know the appropriate bounds in which to rasterize it.
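For reference, a minimal sketch of that preview-layer route, assuming an existing AVCaptureSession named session (an illustrative name, not from the question):

#import <AVFoundation/AVFoundation.h>

// Attach a preview layer directly to the capture session; it renders
// and updates itself with no per-frame work on our side.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];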

UIImage purports merely to wrap a CIImage; it doesn't convert it to pixels. Presumably UIImageView is supposed to achieve that, but if so, I can't find where you would supply the appropriate output rectangle.

I've had success just dodging around the issue with:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; 

CIContext *temporaryContext = [CIContext contextWithOptions:nil]; 
CGImageRef videoImage = [temporaryContext 
        createCGImage:ciImage 
        fromRect:CGRectMake(0, 0, 
          CVPixelBufferGetWidth(pixelBuffer), 
          CVPixelBufferGetHeight(pixelBuffer))]; 

UIImage *uiImage = [UIImage imageWithCGImage:videoImage]; 
CGImageRelease(videoImage); 

That gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
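If you do want to skip the CGImage, one speculative sketch is to rasterize the CIImage straight into memory you own with CIContext's render:toBitmap:rowBytes:bounds:format:colorSpace:, which also hands you raw pixels to play with (the buffer name bitmap is illustrative; this assumes BGRA output):

// Render the CIImage recipe directly into our own buffer, no CGImage involved.
size_t w = CVPixelBufferGetWidth(pixelBuffer);
size_t h = CVPixelBufferGetHeight(pixelBuffer);
size_t rowBytes = 4 * w; // BGRA: 4 bytes per pixel, no row padding here
void *bitmap = malloc(rowBytes * h);

CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
[temporaryContext render:ciImage
                toBitmap:bitmap
                rowBytes:rowBytes
                  bounds:CGRectMake(0, 0, w, h)
                  format:kCIFormatBGRA8
              colorSpace:cs];
CGColorSpaceRelease(cs);
// ...read or modify the pixels in bitmap, then free() it when done.
free(bitmap);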

+0

Thanks, I'll give it a try. The reason the preview layer is no use to me is that I need more resolution. And the reason I went with a CIImage rather than a JPEG representation is to find out whether JPEG compression would add significant artifacts. If the artifacts are small, I may actually end up going with JPEG. – mahboudz

+0

It works. Thanks! – mahboudz

11

Another way to get a UIImage. This performs about 10 times faster, at least in my case:

int w = CVPixelBufferGetWidth(pixelBuffer); 
int h = CVPixelBufferGetHeight(pixelBuffer); 
int r = CVPixelBufferGetBytesPerRow(pixelBuffer); 
int bytesPerPixel = r/w; 

unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer); 

UIGraphicsBeginImageContext(CGSizeMake(w, h)); 

CGContextRef c = UIGraphicsGetCurrentContext(); 

unsigned char* data = CGBitmapContextGetData(c); 
if (data != NULL) { 
    int maxY = h; 
    for(int y = 0; y<maxY; y++) { 
     for(int x = 0; x<w; x++) { 
     int offset = bytesPerPixel*((w*y)+x); 
     data[offset] = buffer[offset];  // R 
     data[offset+1] = buffer[offset+1]; // G 
     data[offset+2] = buffer[offset+2]; // B 
     data[offset+3] = buffer[offset+3]; // A 
     } 
    } 
} 
UIImage *img = UIGraphicsGetImageFromCurrentImageContext(); 

UIGraphicsEndImageContext(); 
+0

You should use incremented pointers; that will get you a small speed boost – jjxtra
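A minimal sketch of that suggestion, reusing buffer, data, w, h, and bytesPerPixel from the answer above:

// Hypothetical pointer-incrementing variant of the copy loop: walk both
// buffers with moving pointers instead of recomputing offset per pixel.
unsigned char *src = buffer;
unsigned char *dst = data;
unsigned char *end = dst + (size_t)bytesPerPixel * w * h;
while (dst < end) {
    *dst++ = *src++;
}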

+6

You need to insert a call to CVPixelBufferLockBaseAddress before calling CVPixelBufferGetBaseAddress, and a call to CVPixelBufferUnlockBaseAddress after the data has been copied. You might also consider using CVPixelBufferGetDataSize and memcpy() to perform a single block copy of the data. –
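A hedged sketch of that suggestion, again reusing data from the answer above; note that CVPixelBufferGetDataSize includes any row padding, so the destination must be at least that large:

// Lock, single block copy, unlock, per the comment's suggestion.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
memcpy(data,
       CVPixelBufferGetBaseAddress(pixelBuffer),
       CVPixelBufferGetDataSize(pixelBuffer)); // may exceed 4*w*h if rows are padded
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);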

8

Unless your image data is in some different format that requires swizzling or conversion, I'd recommend not incrementing over anything... just smack the data into your context's memory area with memcpy, e.g.:

//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer); 

UIGraphicsBeginImageContext(CGSizeMake(w, h)); 

CGContextRef c = UIGraphicsGetCurrentContext(); 

void *ctxData = CGBitmapContextGetData(c); 

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!! 
CVPixelBufferLockBaseAddress(pixelBuffer, 0); 
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer); 
memcpy(ctxData, pxData, 4 * w * h); 
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 

... and so on... 
+0

I got a ~50% fps boost on older devices compared to the CGImageCreate route. Thanks! –

+2

Be careful though, because there will often be padding bytes at the end of each row in a CVPixelBuffer. That is, CVPixelBufferGetBytesPerRow may be more than you expect, and then your copied image output will come out all slanty. – Baxissimo
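A hedged sketch of a padding-safe variant of the block copy above, copying row by row instead (names reuse c, pixelBuffer, ctxData, w, and h from that answer, and assume 4 bytes per pixel):

// Copy row by row so padded source rows don't shear the output image.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *srcBase = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t srcBPR = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t dstBPR = CGBitmapContextGetBytesPerRow(c);
size_t copyBytes = 4 * w; // only the meaningful bytes in each row
for (size_t row = 0; row < h; row++) {
    memcpy((uint8_t *)ctxData + row * dstBPR,
           srcBase + row * srcBPR,
           copyBytes);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);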

3

The previous methods led to leaked CG raster data for me. This conversion method didn't leak for me:

@autoreleasepool { 

    CGImageRef cgImage = NULL; 
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer,&cgImage); 
    if (res == noErr){ 
     UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
     // ... use image here, before the autorelease pool drains ...
    }
    CGImageRelease(cgImage); 
} 


// Forward declaration: release callback used by the data provider below.
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size);

static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if (kCVPixelFormatType_32BGRA == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    width = CVPixelBufferGetWidth(pixelBuffer);
    height = CVPixelBufferGetHeight(pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer);

    colorspace = CGColorSpaceCreateDeviceRGB();

    // Retain the pixel buffer; it stays locked and alive until the data
    // provider's release callback below runs.
    CVPixelBufferRetain(pixelBuffer);
    provider = CGDataProviderCreateWithData((void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    if (err && image) {
        CGImageRelease(image);
        image = NULL;
    }
    if (provider) CGDataProviderRelease(provider);
    if (colorspace) CGColorSpaceRelease(colorspace);
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
0

Try this in Swift.

import UIKit
import VideoToolbox

extension UIImage { 
    public convenience init?(pixelBuffer: CVPixelBuffer) { 
     var cgImage: CGImage? 
     VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage) 

     if let cgImage = cgImage { 
      self.init(cgImage: cgImage) 
     } else { 
      return nil 
     } 
    } 
} 

Note: this only works for RGB pixel buffers, not grayscale ones.