
Image capture distorted when using CALayer

I am developing a photo-capture app. The app's preview layer is set up to take up exactly half of the screen using this code:

[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)]; 

This looks perfect, with no distortion at all, while the user is viewing the camera "preview", i.e. what they see while taking the picture.

However, once they actually take a photo, I create a sublayer, set its frame property to my preview layer's frame, and set the photo as the sublayer's contents.

This does technically work: as soon as the user takes a photo, it appears in the top half of the screen, just like the preview did.

The only problem is that the photo is distorted.

It looks stretched, as if I had taken a landscape photo.

Any help is greatly appreciated. I am completely stuck on this and have not been able to fix it after working on it all day.

Here is all of my view controller code:

#import "MediaCaptureVC.h" 

@interface MediaCaptureVC() 

@end 

@implementation MediaCaptureVC 

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil 
{ 
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; 
    if (self) { 
     // Custom initialization 
    } 
    return self; 
} 

- (void)viewDidLoad 
{ 

    [super viewDidLoad]; 
    // Do any additional setup after loading the view. 


    AVCaptureSession *session =[[AVCaptureSession alloc]init]; 


    [session setSessionPreset:AVCaptureSessionPresetPhoto]; 


    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 


    NSError *error = [[NSError alloc]init]; 

    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error]; 


    if([session canAddInput:deviceInput]) 
     [session addInput:deviceInput]; 


    _previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session]; 


    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill]; 

    CALayer *rootLayer = [[self view]layer]; 

    [rootLayer setMasksToBounds:YES]; 


    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)]; 


    [rootLayer insertSublayer:_previewLayer atIndex:0]; 


    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init]; 

    [session addOutput:_stillImageOutput]; 

    [session startRunning]; 


}

- (void)didReceiveMemoryWarning 
{ 
    [super didReceiveMemoryWarning]; 
    // Dispose of any resources that can be recreated. 
} 


-(UIImage*) rotate:(UIImage*) src andOrientation:(UIImageOrientation)orientation 
{ 
    UIGraphicsBeginImageContext(src.size); 

    CGContextRef context=(UIGraphicsGetCurrentContext()); 

    if (orientation == UIImageOrientationRight) { 
     CGContextRotateCTM (context, 90/180*M_PI) ; 
    } else if (orientation == UIImageOrientationLeft) { 
     CGContextRotateCTM (context, -90/180*M_PI); 
    } else if (orientation == UIImageOrientationDown) { 
     // NOTHING 
    } else if (orientation == UIImageOrientationUp) { 
     CGContextRotateCTM (context, 90/180*M_PI); 
    } 

    [src drawAtPoint:CGPointMake(0, 0)]; 
    UIImage *img=UIGraphicsGetImageFromCurrentImageContext(); 
    UIGraphicsEndImageContext(); 
    return img; 

} 



-(IBAction)stillImageCapture { 

    AVCaptureConnection *videoConnection = nil; 
    for (AVCaptureConnection *connection in _stillImageOutput.connections){ 
     for (AVCaptureInputPort *port in [connection inputPorts]){ 

      if ([[port mediaType] isEqual:AVMediaTypeVideo]){ 

       videoConnection = connection; 
       break; 
      } 
     } 
     if (videoConnection) { 
      break; 
     } 
    } 

    NSLog(@"about to request a capture from: %@", _stillImageOutput); 

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

     if (imageDataSampleBuffer) {

      NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

      UIImage *image = [[UIImage alloc] initWithData:imageData];

      image = [self rotate:image andOrientation:image.imageOrientation];

      CALayer *subLayer = [CALayer layer];

      CGImageRef imageRef = image.CGImage;

      subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;

      subLayer.frame = _previewLayer.frame;

      CALayer *rootLayer = [[self view] layer];

      [rootLayer setMasksToBounds:YES];

      [subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];

      [_previewLayer addSublayer:subLayer];

      NSLog(@"%@", subLayer.contents);

      NSLog(@"Orientation: %d", image.imageOrientation);
     }
    }];

} 

@end 

You can set your image view's content mode. – Manthan
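The question draws the photo into a bare CALayer rather than a UIImageView, so the layer-level analogue of contentMode is contentsGravity. A minimal sketch of what this comment suggests, assuming the subLayer created in the capture handler above:

// The default contentsGravity is kCAGravityResize, which stretches the photo
// to fill the half-screen frame and causes the distortion described above.
subLayer.contentsGravity = kCAGravityResizeAspect;          // letterbox, no cropping
// or: subLayer.contentsGravity = kCAGravityResizeAspectFill;  // crop, matches the preview
subLayer.masksToBounds = YES;   // clip the overflow if AspectFill is used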

Answer


Hi, I hope this helps you.

The code seems more complicated than it needs to be, because most of it is done at the CALayer level rather than at the imageView/view level, but I think the problem is that the proportions of the frame differ between the original capture and your mini viewport, and that is what distorts the UIImage in this statement:

[subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)]; 

What needs to be done is to take the proportions of the sublayer's frame and compute the best-fit size for the underlying layer, or for the image view associated with it.

I have some code from before that does this: a subroutine that handles the proportions (note that you will need to adjust the origin of the coordinate system to get exactly what you want!):

... 
CGRect newbounds = [self figure_proportion:image to_fit_rect:rootLayer.frame]; 
if (newbounds.size.height < rootLayer.frame.size.height) { 
    // rootLayer ..... (code that adjusts the origin of the image view frame) 
} 

-(CGRect) figure_proportion:(UIImage *) image2 to_fit_rect:(CGRect) rect { 
    CGSize image_size = image2.size; 
    CGRect newrect = rect; 
    float wfactor = image_size.width / image_size.height; 
    float hfactor = image_size.height / image_size.width; 

    if (image2.size.width > image2.size.height) { 
     newrect.size.width = rect.size.width; 
     newrect.size.height = (rect.size.width * hfactor); 
    } 
    else if (image2.size.height > image2.size.width) { 
     newrect.size.height = rect.size.height; 
     newrect.size.width = (rect.size.height * wfactor); 
    } 
    else { 
     newrect.size.width = rect.size.width; 
     newrect.size.height = newrect.size.width; 
    } 
    if (newrect.size.height > rect.size.height) { 
     newrect.size.height = rect.size.height; 
     newrect.size.width = (newrect.size.height * wfactor); 
    } 
    if (newrect.size.width > rect.size.width) { 
     newrect.size.width = rect.size.width; 
     newrect.size.height = (newrect.size.width * hfactor); 
    } 
    return (newrect); 
}
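
For completeness, a hedged usage sketch tying this helper back to the question's capture handler; the target rect and the centering of the resulting frame are assumptions on my part, not part of the original answer:

// Inside the captureStillImageAsynchronouslyFromConnection: completion handler,
// after the UIImage has been built and rotated:
CGRect target = CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2);
CGRect newbounds = [self figure_proportion:image to_fit_rect:target];

// Center the best-fit rect inside the half-screen viewport so the photo keeps
// its aspect ratio instead of being stretched to fill the whole frame.
newbounds.origin.x = (target.size.width - newbounds.size.width) / 2.0;
newbounds.origin.y = (target.size.height - newbounds.size.height) / 2.0;

subLayer.frame = newbounds;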