
I have a photo app that uses AV Foundation. I set up a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen, so while the user is lining up their shot, all they can see is what the top half of the screen sees. I want to crop the captured image so that it is exactly the same as what appears in the AVCaptureVideoPreviewLayer.

This works great, but when the user actually takes the photo and I try to set that photo as the layer's contents, the image comes out distorted. I did some research and realized I need to crop the image.

All I want to do is crop the full captured image so that all that is left is exactly what the user could originally see in the top half of the screen.

I have been able to accomplish this, but only by entering manual CGRect values, and it still does not look perfect. There has to be an easier way to do this.

I have literally gone through every post on Stack Overflow about cropping images over the past two days, and nothing has worked.

There has to be a way to programmatically crop the captured image so that the final image is exactly what was originally seen in the preview layer.

Here is my viewDidLoad implementation:

- (void)viewDidLoad 
{ 
    [super viewDidLoad]; 

    AVCaptureSession *session =[[AVCaptureSession alloc]init]; 
    [session setSessionPreset:AVCaptureSessionPresetPhoto]; 

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 

    NSError *error = [[NSError alloc]init]; 
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error]; 

    if([session canAddInput:deviceInput]) 
     [session addInput:deviceInput]; 

    CALayer *rootLayer = [[self view]layer]; 
    [rootLayer setMasksToBounds:YES]; 

    _previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session]; 
    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)]; 
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill]; 

    [rootLayer insertSublayer:_previewLayer atIndex:0]; 

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init]; 
    [session addOutput:_stillImageOutput]; 

    [session startRunning]; 
}

Here is the code that runs when the user presses the button to capture the photo:

-(IBAction)stillImageCapture { 
    AVCaptureConnection *videoConnection = nil; 
    for (AVCaptureConnection *connection in _stillImageOutput.connections){ 
     for (AVCaptureInputPort *port in [connection inputPorts]){ 
      if ([[port mediaType] isEqual:AVMediaTypeVideo]){ 
       videoConnection = connection; 
       break; 
      } 
     } 
     if (videoConnection) { 
      break; 
     } 
    } 

    NSLog(@"about to request a capture from: %@", _stillImageOutput); 

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) { 
     if(imageDataSampleBuffer) { 
      NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer]; 

      UIImage *image = [[UIImage alloc]initWithData:imageData]; 
      CALayer *subLayer = [CALayer layer]; 
      subLayer.frame = _previewLayer.frame; 
      image = [self rotate:image andOrientation:image.imageOrientation]; 

      //Below is the crop that is sort of working for me, but as you can see I am manually entering in values and just guessing and it still does not look perfect. 
      CGRect cropRect = CGRectMake(0, 650, 3000, 2000); 
      CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect); 

      subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage; 
      subLayer.frame = _previewLayer.frame; 

      [_previewLayer addSublayer:subLayer]; 
     } 
    }]; 
} 

Answers

Answer (score 18):

Take a look at AVCaptureVideoPreviewLayer's

-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect 

This method lets you easily convert the visible CGRect of your layer into coordinates for the actual camera output.

One caveat: the physical camera is not mounted "top side up"; it is rotated 90 degrees clockwise. (So if you hold your iPhone with the Home Button to the right, the camera is actually mounted top side up.)

With that in mind, you have to convert the CGRect the method above gives you in order to crop the image to exactly what is shown on screen.

Example:

CGRect visibleLayerFrame = ...; // the actual visible area in the layer frame 
CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame]; 


CGSize originalSize = [originalImage size]; 

if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) { 
    // For portrait images, swap the size of the image, because 
    // here the output image is actually rotated relative to what you see on screen. 

    CGFloat temp = originalSize.width; 
    originalSize.width = originalSize.height; 
    originalSize.height = temp; 
} 


// metaRect is fractional, that's why we multiply here 

CGRect cropRect; 

cropRect.origin.x = metaRect.origin.x * originalSize.width; 
cropRect.origin.y = metaRect.origin.y * originalSize.height; 
cropRect.size.width = metaRect.size.width * originalSize.width; 
cropRect.size.height = metaRect.size.height * originalSize.height; 

cropRect = CGRectIntegral(cropRect); 

This can be a little confusing, but what made me really understand it is this:

Hold your device "Home Button right" -> you will see that the x-axis actually lies along the "height" of your iPhone, while the y-axis lies along its "width". That's why portrait images need their size swapped ;)
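
For completeness, here is a minimal Objective-C sketch of actually applying that cropRect, assuming originalImage is the UIImage created from the captured JPEG data as in the question (the Swift answers below do the same thing):

// Crop in pixel coordinates, then re-wrap the result with the original
// orientation so UIKit still knows how to display it.
CGImageRef croppedCGImage =
    CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage); // the Create/Copy rule: we own this reference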

Comment from Daniel: @user3117509 This answer deserves the points, please accept it!

Comment from Daniel: And @Cabus, you don't rotate the image. I did that with the following code: 'UIImage *croppedImage = [UIImage imageWithCGImage:imageRef]; UIGraphicsBeginImageContext(croppedImage.size); [[UIImage imageWithCGImage:croppedImage.CGImage scale:1.0 orientation:UIImageOrientationRight] drawInRect:CGRectMake(0, 0, croppedImage.size.height, croppedImage.size.width)]; UIImage *rotatedCroppedImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext();'
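
Reformatted for readability, the rotate-after-crop step that comment describes might look like the sketch below. It assumes imageRef is the cropped CGImage from the question's completion handler, sizes the bitmap context to the rotated (swapped) dimensions so the redrawn image is not distorted, and hard-codes UIImageOrientationRight, which matches a portrait capture:

// Bake the 90-degree rotation into the pixels, because CALayer.contents takes
// a raw CGImage and ignores UIImage's imageOrientation.
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];

// The rotated image swaps width and height, so size the context accordingly.
CGSize rotatedSize = CGSizeMake(croppedImage.size.height, croppedImage.size.width);
UIGraphicsBeginImageContext(rotatedSize);

// Re-wrap the CGImage with a "right" orientation; drawInRect: applies it.
[[UIImage imageWithCGImage:croppedImage.CGImage
                     scale:1.0
               orientation:UIImageOrientationRight]
    drawInRect:CGRectMake(0, 0, rotatedSize.width, rotatedSize.height)];

UIImage *rotatedCroppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();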

Answer (score 6):

@Cabus has a working solution and you should upvote his answer. However, I made my own version in Swift with the following:

// The image returned in initialImageData will be larger than what 
// is shown in the AVCaptureVideoPreviewLayer, so we need to crop it. 
let image : UIImage = UIImage(data: initialImageData)! 

let originalSize : CGSize 
let visibleLayerFrame = self.previewView!.bounds // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME 

// Calculate the fractional size that is shown in the preview 
let metaRect : CGRect = (self.videoPreviewLayer?.metadataOutputRectOfInterestForRect(visibleLayerFrame))! 
if (image.imageOrientation == UIImageOrientation.Left || image.imageOrientation == UIImageOrientation.Right) { 
    // For these images (which are portrait), swap the size of the 
    // image, because here the output image is actually rotated 
    // relative to what you see on screen. 
    originalSize = CGSize(width: image.size.height, height: image.size.width) 
} 
else { 
    originalSize = image.size 
} 

// metaRect is fractional, that's why we multiply here. 
let cropRect : CGRect = CGRectIntegral(
     CGRect(x: metaRect.origin.x * originalSize.width, 
       y: metaRect.origin.y * originalSize.height, 
       width: metaRect.size.width * originalSize.width, 
       height: metaRect.size.height * originalSize.height)) 

let finalImage : UIImage = 
    UIImage(CGImage: CGImageCreateWithImageInRect(image.CGImage, cropRect)!, 
     scale:1, 
     orientation: image.imageOrientation) 

Answer (score 0):

Here is @Erik Allen's answer in Swift 3:

let originalSize: CGSize 
let visibleLayerFrame = self?.photoView.bounds 

// Calculate the fractional size that is shown in the preview 
let metaRect = (self?.videoPreviewLayer?.metadataOutputRectOfInterest(for: visibleLayerFrame ?? CGRect.zero)) ?? CGRect.zero 

if (image.imageOrientation == UIImageOrientation.left || image.imageOrientation == UIImageOrientation.right) { 
    // For these images (which are portrait), swap the size of the 
    // image, because here the output image is actually rotated 
    // relative to what you see on screen. 
    originalSize = CGSize(width: image.size.height, height: image.size.width) 
} else { 
    originalSize = image.size 
} 

let cropRect: CGRect = CGRect(x: metaRect.origin.x * originalSize.width, y: metaRect.origin.y * originalSize.height, width: metaRect.size.width * originalSize.width, height: metaRect.size.height * originalSize.height).integral 

if let finalCgImage = image.cgImage?.cropping(to: cropRect) { 
    let finalImage = UIImage(cgImage: finalCgImage, scale: 1.0, orientation: image.imageOrientation) 

    // Use your image... 
}
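
Tying this back to the question: once the crop (and, for a portrait capture, the rotation) has been applied, the result can be shown in the top half of the screen much as the question's handler already does. A rough Objective-C sketch, reusing the question's subLayer and _previewLayer together with the rotatedCroppedImage from the comment above instead of the hard-coded CGRectMake(0, 650, 3000, 2000) crop:

// Display the metadata-based crop over the preview area.
subLayer.contents = (__bridge id)rotatedCroppedImage.CGImage; // CALayer wants a CGImage
subLayer.frame = _previewLayer.bounds;                        // cover the preview layer
[_previewLayer addSublayer:subLayer];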