2011-12-13
16

iOS - scale and crop CMSampleBufferRef/CVImageBufferRef

I am using AVFoundation and receiving sample buffers from an AVCaptureVideoDataOutput. I can write them directly to my videoWriter like this:

- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.videoWriter.status != AVAssetWriterStatusWriting) {
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }

    [self.videoWriterInput appendSampleBuffer:sampleBuffer];
}

What I want to do now is crop and scale the image inside the CMSampleBufferRef without converting it to a UIImage or CGImageRef, because that hurts performance.

Answers

21

If you use vImage, you can work on the buffer data directly, without converting it to any image format first.

outImg contains the cropped and scaled image data. The ratio of outWidth to cropWidth sets the scale factor. (See also: vImage cropping.)

int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;   // full source row length, even when cropping

// Offset the base address to the top-left pixel of the crop (4 bytes per pixel).
size_t startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;

unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// ... use outImg, then free(outImg) when done

So setting cropX0 = 0, cropY0 = 0, and cropWidth and cropHeight to the original size means no cropping (the whole source image is used), and setting outWidth = cropWidth and outHeight = cropHeight means no scaling. Note that inBuff.rowBytes should always be the row length of the full source buffer, not the cropped length.

7

You might consider using Core Image (iOS 5.0+):

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer) 
              options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]]; 
ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect]; 
+2

Then how can I convert it back to a CMSampleBuffer, or how can I write it out using self.videoWriterInput? – vodkhang 2011-12-14 01:36:19

+0

Anyone have any ideas? – Ondrej 2012-05-02 20:23:01

+0

CIContext has a method to rasterize a CIImage into a CVPixelBuffer. – 2012-07-31 07:37:54

1

For scaling, you can have AVFoundation do it for you. See my recent post here. Setting values for the AVVideoWidth/AVVideoHeight keys will scale the image if it is not the same size. Have a look at the properties here. As for cropping, I'm not sure AVFoundation can do that for you; you may have to resort to OpenGL or Core Image. There are a couple of good links in the top post of this SO question.

+0

I can let it scale automatically for me, but it keeps complaining that I'm running out of memory, as you can see in my latest post here: http://stackoverflow.com/questions/8561456/ios-automatically-resize-cvpixelbufferref. The constant resizing seems to be the cause. – vodkhang 2011-12-25 16:25:02

4

Note

I recently needed to rewrite this function and found that this approach no longer seems to work (at least I could not get it working on iOS 10.3.1): the output image is misaligned. I suspect a wrong bytesPerRow is the reason.


Original answer

The buffer is simply an array of pixels, so you can actually process it directly without using vImage. The code is written in Swift, but I think it's easy to find the Objective-C equivalent.

斯威夫特3

let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)

let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

// The context reads straight from the buffer's memory, so keep the buffer
// locked until makeImage() has copied the pixels out.
let context = CGContext(data: baseAddress, width: cropWidth, height: cropHeight, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

// create image
let cgImage: CGImage = context!.makeImage()!
let image = UIImage(cgImage: cgImage)

CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

斯威夫特2

let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

CVPixelBufferLockBaseAddress(imageBuffer, 0)

let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

let context = CGBitmapContextCreate(baseAddress, cropWidth, cropHeight, 8, bytesPerRow, colorSpace, CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)

// create image
let cgImage: CGImageRef = CGBitmapContextCreateImage(context)!
let image = UIImage(CGImage: cgImage)

// unlock once the image has been created
CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

If you want to crop from a specific position, add the following code:

// calculate start position 
let bytesPerPixel = 4 
let startPoint = [ "x": 10, "y": 10 ] 
let startAddress = baseAddress + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel 

and change baseAddress in CGBitmapContextCreate to startAddress. Make sure the crop does not exceed the width and height of the original image.

0

Try this for Swift 3:

// Assumed to live in an extension on CMSampleBuffer.
func resize(_ destSize: CGSize) -> CVPixelBuffer? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }
    // Lock the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    // Get information about the image
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CGFloat(CVPixelBufferGetBytesPerRow(imageBuffer))
    let height = CGFloat(CVPixelBufferGetHeight(imageBuffer))
    let width = CGFloat(CVPixelBufferGetWidth(imageBuffer))
    var pixelBuffer: CVPixelBuffer?
    let options = [kCVPixelBufferCGImageCompatibilityKey: true,
                   kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    // Center the crop: half the surplus on each side, and convert the
    // horizontal margin from pixels to bytes (4 bytes per BGRA pixel).
    let topMargin = (height - destSize.height) / 2
    let leftMargin = (width - destSize.width) / 2
    let baseAddressStart = Int(bytesPerRow * topMargin + 4 * leftMargin)
    let addressPoint = baseAddress!.assumingMemoryBound(to: UInt8.self)
    let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(destSize.width), Int(destSize.height), kCVPixelFormatType_32BGRA, &addressPoint[baseAddressStart], Int(bytesPerRow), nil, nil, options as CFDictionary, &pixelBuffer)
    if status != 0 {
        print(status)
        return nil
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}