
I want to implement a custom camera in my app, so I am using AVCaptureDevice to build it, and I want to set the output of the AVCaptureDevice to grayscale in iOS.

Now I want to show only a grayscale feed in my custom camera, so I am trying to use setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: with AVCaptureWhiteBalanceGains. I am referring to Apple's AVCamManual: Extending AVCam to Use Manual Capture sample.

- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
    NSError *error = nil;

    if ([videoDevice lockForConfiguration:&error]) {
        AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains]; // Conversion can yield out-of-bound values, cap to limits
        [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
        [videoDevice unlockForConfiguration];
    }
    else {
        NSLog(@"Could not lock device for configuration: %@", error);
    }
}

But for this I have to pass RGB gain values between 1 and 4, so I created this method to clamp them to the minimum and maximum values:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains 
{ 
    AVCaptureWhiteBalanceGains g = gains; 

    g.redGain = MAX(1.0, g.redGain); 
    g.greenGain = MAX(1.0, g.greenGain); 
    g.blueGain = MAX(1.0, g.blueGain); 

    g.redGain = MIN(videoDevice.maxWhiteBalanceGain, g.redGain); 
    g.greenGain = MIN(videoDevice.maxWhiteBalanceGain, g.greenGain); 
    g.blueGain = MIN(videoDevice.maxWhiteBalanceGain, g.blueGain); 

    return g; 
} 

Also, I get different effects by passing static RGB gain values, for example:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains 
{ 
    AVCaptureWhiteBalanceGains g = gains; 
    g.redGain = 3; 
    g.greenGain = 2; 
    g.blueGain = 1; 
    return g; 
} 

Now I want to set this grayscale (formula: Pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B) on my custom camera. I have tried it with this formula:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains 
{ 
    AVCaptureWhiteBalanceGains g = gains; 

    g.redGain = g.redGain * 0.30078125; 
    g.greenGain = g.greenGain * 0.5859375; 
    g.blueGain = g.blueGain * 0.11328125; 

    float grayScale = g.redGain + g.greenGain + g.blueGain; 

    g.redGain = MAX(1.0, grayScale); 
    g.greenGain = MAX(1.0, grayScale); 
    g.blueGain = MAX(1.0, grayScale); 

    g.redGain = MIN(videoDevice.maxWhiteBalanceGain, g.redGain); 
    g.greenGain = MIN(videoDevice.maxWhiteBalanceGain, g.greenGain); 
    g.blueGain = MIN(videoDevice.maxWhiteBalanceGain, g.blueGain); 

    return g; 
} 

So how can I pass these values between 1 and 4?

Is there any method or scale by which I can compare these things?

Any help would be appreciated.


Adjusting the white balance won't turn a color image into a black-and-white one. You need a different API for that, for example [vImageMatrixMultiply_ARGB8888](https://developer.apple.com/library/ios/documentation/Performance/Reference/vImage_transform/index.html#//apple_ref/c/func/vImageMatrixMultiply_ARGB8888) – Mats


@Mats: Yes, thank you! Please provide some sample code for better understanding. –


Maybe this question helps: http://stackoverflow.com/questions/21207099/ – Mats
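
For reference, here is a rough sketch (in Swift, to match the answer below) of the vImageMatrixMultiply_ARGB8888 approach suggested above. It is illustrative only, not tested code: it assumes a 32-bit BGRA pixel buffer, the helper name convertToGrayscale is made up, and the integer weights are simply the question's coefficients scaled by 256 (77/256 = 0.30078125, 150/256 = 0.5859375, 29/256 = 0.11328125).

import Accelerate
import CoreVideo

// Sketch: convert one BGRA pixel buffer to grayscale in place with vImage.
// The 4x4 matrix is laid out assuming source channels as rows and destination
// channels as columns; verify this against the vImage headers before relying on it.
func convertToGrayscale(pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)

    var buffer = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        height: vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer)),
        width: vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer)),
        rowBytes: CVPixelBufferGetBytesPerRow(pixelBuffer)
    )

    // Channel order in memory is B, G, R, A; alpha passes through unchanged.
    let divisor: Int32 = 256
    let matrix: [Int16] = [
         29,  29,  29,   0, // B = 0.11328125
        150, 150, 150,   0, // G = 0.5859375
         77,  77,  77,   0, // R = 0.30078125
          0,   0,   0, 256  // A
    ]

    vImageMatrixMultiply_ARGB8888(&buffer, &buffer, matrix, divisor, nil, nil, vImage_Flags(kvImageNoFlags))

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}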

Answer


Core Image provides a collection of filters for adjusting images on the GPU, and it can be used efficiently with video data, whether from a camera feed or from a video file.

There is an article on objc.io that shows how to do this. The examples are in Objective-C, but the explanation should be clear enough to follow.

The basic steps are:

  1. Create an EAGLContext configured to use OpenGL ES 2.
  2. Create a GLKView with the EAGLContext to display the rendered output.
  3. Create a CIContext that uses the same EAGLContext.
  4. Create a CIFilter that uses the CIColorMonochrome Core Image filter.
  5. Create an AVCaptureSession with an AVCaptureVideoDataOutput.
  6. In the AVCaptureVideoDataOutput delegate method, convert the CMSampleBuffer to a CIImage, apply the CIFilter to the image, and draw the filtered image using the CIContext.

This pipeline ensures that the video pixel buffers stay on the GPU all the way from camera to display, avoiding any copy of the data to the CPU, which preserves real-time performance.

To save the filtered video, implement an AVAssetWriter and append the sample buffers in the same AVCaptureVideoDataOutput delegate where the filtering is done (a rough sketch of this step follows the example below).

Here is an example in Swift.

Example on GitHub

import UIKit 
import GLKit 
import AVFoundation 

private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5)) 

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { 

    private var context: CIContext! 
    private var targetRect: CGRect! 
    private var session: AVCaptureSession! 
    private var filter: CIFilter! 

    @IBOutlet var glView: GLKView! 

    override func prefersStatusBarHidden() -> Bool { 
     return true 
    } 

    override func viewDidAppear(animated: Bool) { 
     super.viewDidAppear(animated) 

     let whiteColor = CIColor(
      red: 1.0, 
      green: 1.0, 
      blue: 1.0 
     ) 

     filter = CIFilter(
      name: "CIColorMonochrome", 
      withInputParameters: [ 
       "inputColor" : whiteColor, 
       "inputIntensity" : 1.0 
      ] 
     ) 

     // GL context 

     let glContext = EAGLContext(
      API: .OpenGLES2 
     ) 

     glView.context = glContext 
     glView.enableSetNeedsDisplay = false 

     context = CIContext(
      EAGLContext: glContext, 
      options: [ 
       kCIContextOutputColorSpace: NSNull(), 
       kCIContextWorkingColorSpace: NSNull(), 
      ] 
     ) 

     let screenSize = UIScreen.mainScreen().bounds.size 
     let screenScale = UIScreen.mainScreen().scale 

     targetRect = CGRect(
      x: 0, 
      y: 0, 
      width: screenSize.width * screenScale, 
      height: screenSize.height * screenScale 
     ) 

     // Setup capture session. 

     let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) 

     let videoInput = try? AVCaptureDeviceInput(
      device: cameraDevice 
     ) 

     let videoOutput = AVCaptureVideoDataOutput() 
     videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue()) 

     session = AVCaptureSession() 
     session.beginConfiguration() 
     session.addInput(videoInput) 
     session.addOutput(videoOutput) 
     session.commitConfiguration() 
     session.startRunning() 
    } 

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) { 

     guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { 
      return 
     } 

     let originalImage = CIImage(
      CVPixelBuffer: pixelBuffer, 
      options: [ 
       kCIImageColorSpace: NSNull() 
      ] 
     ) 

     let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform) 

     filter.setValue(rotatedImage, forKey: kCIInputImageKey) 

     guard let filteredImage = filter.outputImage else { 
      return 
     } 

     context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent) 

     glView.display() 
    } 

    func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) { 
     let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) 
     print("dropped sample buffer: \(seconds)") 
    } 
} 
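
To illustrate the saving step mentioned above, here is a rough sketch of a minimal writer that the capture delegate could feed with each filtered frame. It is not part of the original example: the class name, the 1280x720 H.264 settings and the output URL are illustrative assumptions, and the filtered CIImage would first have to be rendered into a pixel buffer (for example with context.render(filteredImage, toCVPixelBuffer:)) before being appended.

import AVFoundation
import CoreMedia

// Sketch only: a minimal AVAssetWriter wrapper for appending filtered frames.
final class FilteredMovieWriter {

    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor

    init(outputURL: NSURL) throws {
        writer = try AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)

        input = AVAssetWriterInput(
            mediaType: AVMediaTypeVideo,
            outputSettings: [
                AVVideoCodecKey: AVVideoCodecH264,
                AVVideoWidthKey: 1280, // illustrative size
                AVVideoHeightKey: 720
            ]
        )
        input.expectsMediaDataInRealTime = true
        writer.addInput(input)

        // A real implementation would usually pull fresh buffers from
        // adaptor.pixelBufferPool rather than reuse the camera's buffers.
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: nil
        )
    }

    // Call from the same delegate method that filters the frame, passing the
    // pixel buffer the filtered image was rendered into and its timestamp.
    func append(pixelBuffer: CVPixelBuffer, time: CMTime) {
        if writer.status == .Unknown {
            writer.startWriting()
            writer.startSessionAtSourceTime(time)
        }
        if writer.status == .Writing && input.readyForMoreMediaData {
            adaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: time)
        }
    }

    func finish(completion: () -> Void) {
        input.markAsFinished()
        writer.finishWritingWithCompletionHandler(completion)
    }
}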

Yesss, perfect solution. Thank you Luke. :) I have implemented this in my app, but sometimes it crashes at the 'glView.display()' line. –


How can I capture an image using the GLKView? –


The crash is probably caused by modifying the filter or the context from different threads. A safe way around this is to do all of the work on the main thread (I have updated the example to show this). Just be careful not to use resource-intensive filters (blurs, for example) or to do too much extra work on the main thread. In practice you would probably want to use multiple threads to avoid blocking the main thread, although that is a complex topic. Have a look at Apple's OpenGL documentation if you are interested. –
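
For reference, a minimal sketch of the multi-threaded variant mentioned above (not from the thread, just a common pattern): give the video output its own serial queue, so the corresponding lines in viewDidAppear from the example would change roughly like this, and the filter and context would then only be touched from that queue.

// Sketch: deliver sample buffers on a private serial queue instead of the main queue.
// Passing nil attributes creates a serial queue; the label is an arbitrary example.
let videoQueue = dispatch_queue_create("video.filter.queue", nil)
videoOutput.setSampleBufferDelegate(self, queue: videoQueue)

// Any UIKit-only work inside the delegate then has to hop back to the main queue:
dispatch_async(dispatch_get_main_queue()) {
    // update UIKit elements here
}

Note that the GLKView rendering still has thread-affinity constraints (its EAGLContext must be current on whichever thread draws), which is one reason the example keeps everything on the main thread.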