Hello, I'm fairly new to Swift and the AVFoundation framework. I'm currently trying to implement a custom toy camera app that does image processing. Specifically, I want to automatically run an image processing algorithm on every frame (or every X frames), apply a layer over the preview, and also let the user capture the preview image with an IBAction UI button and output that image to the screen. The image processing step should capture a lower-resolution image that is never shown on screen, whereas the IBAction capture should take a standard JPEG and display it on screen.
My question is: what is the best way to implement something like this?

I've gathered that the logic probably belongs in the AVCapturePhotoCaptureDelegate method capture(_ captureOutput: AVCapturePhotoOutput, ...). But capturePhoto(with: photoSettings, delegate: self) is also what triggers capture(_ captureOutput: AVCapturePhotoOutput, ...), and as I understand it the calls would come from two different AVCapturePhotoOutput() objects. Inside capture(_ captureOutput: AVCapturePhotoOutput, ...), should I check which captureOutput object was sent, i.e. which AVCapturePhotoOutput() object's capturePhoto(with: photoSettings, delegate: self) call produced it? Here the signature of the capture(_ captureOutput: AVCapturePhotoOutput, ...) method is:
capture(_ captureOutput: AVCapturePhotoOutput,
        didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
        previewPhotoSampleBuffer: CMSampleBuffer?,
        resolvedSettings: AVCaptureResolvedPhotoSettings,
        bracketSettings: AVCaptureBracketedStillImageSettings?,
        error: Error?)
Or would these two different capture functions have to run on two different threads?
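For example, if both outputs share one delegate, the pattern I imagine is an identity check on the captureOutput parameter, along the lines of this sketch (processingPhotoOutput and uiPhotoOutput are placeholder property names for the two AVCapturePhotoOutput instances, not actual code from my apps):

func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    // Identity comparison: which AVCapturePhotoOutput fired this callback?
    if captureOutput === processingPhotoOutput {
        // low-resolution capture destined for the image processing pipeline
    } else if captureOutput === uiPhotoOutput {
        // standard JPEG capture destined for the screen
    }
}

But I'm not sure whether this is the intended pattern.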
To elaborate: I have two working camera apps, each implementing one of the two features. One has the image processing and layer implementation; the other has the preview capture and output to a UI view. The corresponding AVCapturePhotoCaptureDelegate capture methods are as follows.

Image processing camera:
public func capture(_ captureOutput: AVCapturePhotoOutput,
                    didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                    previewPhotoSampleBuffer: CMSampleBuffer?,
                    resolvedSettings: AVCaptureResolvedPhotoSettings,
                    bracketSettings: AVCaptureBracketedStillImageSettings?,
                    error: Error?) {
    var imageTexture: MTLTexture?
    var previewImage: UIImage?
    if error == nil {
        imageTexture = convertToMTLTexture(sampleBuffer: photoSampleBuffer)
        previewImage = convertToUIImage(sampleBuffer: previewPhotoSampleBuffer)
    }
    delegate?.videoCapture(self, didCapturePhotoTexture: imageTexture, previewImage: previewImage)
}
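Here convertToMTLTexture and convertToUIImage are custom helper functions in my class. For context, a minimal sample-buffer-to-UIImage conversion along these lines is roughly what I mean by convertToUIImage (a sketch, not my exact helper):

import AVFoundation
import CoreImage
import UIKit

// Sketch only: convert an optional CMSampleBuffer into a UIImage.
func convertToUIImage(sampleBuffer: CMSampleBuffer?) -> UIImage? {
    guard let sampleBuffer = sampleBuffer,
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    // Wrap the pixel buffer in a CIImage and render it out to a CGImage.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}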
Image capturing UI camera:
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    if let error = error {
        print("Error capturing photo: \(error)")
    } else {
        if let sampleBuffer = photoSampleBuffer,
           let previewBuffer = previewPhotoSampleBuffer,
           let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
            if let image = UIImage(data: dataImage) {
                self.capturedImage.image = image
            }
        }
    }
}
where capturedImage is @IBOutlet weak var capturedImage: UIImageView! and the conversion methods are custom functions inside the class. How would I integrate the functionality of these two capture(...) methods into a single app?
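One combined version I've considered looks like the following (a sketch merging the two method bodies, reusing the hypothetical processingPhotoOutput/uiPhotoOutput identity check from above; delegate, convertToMTLTexture, convertToUIImage, and capturedImage are the same members as in my two apps):

public func capture(_ captureOutput: AVCapturePhotoOutput,
                    didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                    previewPhotoSampleBuffer: CMSampleBuffer?,
                    resolvedSettings: AVCaptureResolvedPhotoSettings,
                    bracketSettings: AVCaptureBracketedStillImageSettings?,
                    error: Error?) {
    if let error = error {
        print("Error capturing photo: \(error)")
        return
    }
    if captureOutput === processingPhotoOutput {
        // Image processing path: low-resolution capture, never shown on screen.
        let imageTexture = convertToMTLTexture(sampleBuffer: photoSampleBuffer)
        let previewImage = convertToUIImage(sampleBuffer: previewPhotoSampleBuffer)
        delegate?.videoCapture(self, didCapturePhotoTexture: imageTexture, previewImage: previewImage)
    } else if captureOutput === uiPhotoOutput {
        // UI path: standard JPEG, displayed in the UIImageView.
        if let sampleBuffer = photoSampleBuffer,
           let previewBuffer = previewPhotoSampleBuffer,
           let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer),
           let image = UIImage(data: dataImage) {
            self.capturedImage.image = image
        }
    }
}

Is something like this the right direction, or is there a better-suited pattern?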
It would be better to write out the full name of the method 'capture(_ captureOutput: AVCapturePhotoOutput, ...)'. Is it func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?)? – dengApro
Correct. That is the full name of the method. – stochasticcrap