2016-07-22

I have been experimenting with AVFoundation, trying to apply filters to live video. I tried applying a filter to the frames from an AVCaptureVideoDataOutput, but the rendered output fills only 1/4 of the view.


Here is some of my relevant code.

Capture

let availableCameraDevices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
for device in availableCameraDevices as! [AVCaptureDevice] {
    if device.position == .Back {
        backCameraDevice = device
    } else if device.position == .Front {
        frontCameraDevice = device
    }
}

Configure output

private func configureVideoOutput() {
    videoOutput = AVCaptureVideoDataOutput()
    videoOutput?.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
}

Get the image

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Grab the pixel buffer
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

    // Create a CIImage from it, rotate it, and zero the origin
    var image = CIImage(CVPixelBuffer: pixelBuffer)
    image = image.imageByApplyingTransform(CGAffineTransformMakeRotation(CGFloat(-M_PI_2)))
    let origin = image.extent.origin
    image = image.imageByApplyingTransform(CGAffineTransformMakeTranslation(-origin.x, -origin.y))

    self.manualDelegate?.cameraController(self, didOutputImage: image)
}

Render

func cameraController(cameraController: CameraController, didOutputImage image: CIImage) {
    if glContext != EAGLContext.currentContext() {
        EAGLContext.setCurrentContext(glContext)
    }

    let filteredImage = image.imageByApplyingFilter("CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0])

    var rect = view.bounds

    glView.bindDrawable()
    ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
    glView.display()
}

I suspect the Retina display and its scale factor are causing this, but I am not sure where I should handle it. I have already set the contentScaleFactor on the GLKView, but with no luck.

private var glView: GLKView {
    // Set in storyboard
    return view as! GLKView
}

glView.contentScaleFactor = glView.bounds.size.width / UIScreen.mainScreen().bounds.size.width * UIScreen.mainScreen().scale

Answers

Answer (score 0)

Which format did you set before starting the capture? Are you sure the video preview layer is filling the whole screen? You have two ways to set the resolution during an AVCapture session:

  • Select the AVCaptureDeviceFormat with the highest resolution by looking through the available capture formats
  • Use the sessionPreset property of your capture session. Doc here
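A minimal sketch of the second option, in the same Swift 2 syntax as the question (the fallback preset choice here is an assumption, not part of the answer):

```swift
import AVFoundation

// Sketch: pick the capture resolution via sessionPreset before
// calling startRunning(). Fall back if the preferred preset is
// not supported on this device.
let session = AVCaptureSession()
if session.canSetSessionPreset(AVCaptureSessionPresetHigh) {
    session.sessionPreset = AVCaptureSessionPresetHigh
} else if session.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
    session.sessionPreset = AVCaptureSessionPreset1280x720
}
```

Note that the preset affects the resolution of the buffers delivered to the data output; it does not by itself fix a draw rect that is too small.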
Comment: I am not using AVCaptureVideoPreviewLayer in this case, but OpenGL. I have edited the post to add the capture part. – sarunw

Answer (score 1)

Your problem is the output rect used in the drawImage call:

ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent) 

The image's extent is in actual pixels, while the view's bounds are in points, which are not multiplied by the contentScaleFactor to get pixels. Your device undoubtedly has a contentScaleFactor of 2.0, so the drawn image ends up 1/2 the size in each dimension.

Instead, set the rect to:

var rect = CGRect(x: 0, y: 0, width: glView.drawableWidth,
                  height: glView.drawableHeight)

drawableWidth and drawableHeight return the dimensions in pixels, accounting for the contentScaleFactor. See: https://developer.apple.com/reference/glkit/glkview/1615591-drawablewidth

Also, there is no need to set the glView's contentScaleFactor.
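The points-versus-pixels arithmetic behind this fix can be sketched as a standalone helper (the function name is hypothetical, used only to illustrate what drawableWidth/drawableHeight report):

```swift
import Foundation

// Hypothetical helper: convert a view's bounds (in points) into
// the pixel rect that GLKView's drawableWidth/drawableHeight
// would report for a given content scale factor.
func drawablePixelRect(bounds: CGRect, contentScaleFactor: CGFloat) -> CGRect {
    return CGRect(x: 0, y: 0,
                  width: bounds.size.width * contentScaleFactor,
                  height: bounds.size.height * contentScaleFactor)
}
```

On a 375×667-point screen with a scale factor of 2.0 this yields 750×1334 pixels; passing view.bounds (375×667) as the draw rect therefore covers only a quarter of the drawable's area, which matches the symptom in the question.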

Comment: This is the correct answer. I ran into the same problem, and it was resolved by applying the change suggested in this answer. Thank you. – Mikrasya