I am developing an iPhone application in which there is a requirement to pause and resume the camera, so I am using AVFoundation instead of UIImagePickerController, and I need to change the camera capture device while recording a video.
My code is:
- (void)startup:(BOOL)isFrontCamera
{
    if (_session == nil)
    {
        NSLog(@"Starting up server");
        self.isCapturing = NO;
        self.isPaused = NO;
        _currentFile = 0;
        _discont = NO;

        // create capture device with video input
        _session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (isFrontCamera)
        {
            NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
            AVCaptureDevice *captureDevice = nil;
            for (AVCaptureDevice *device in videoDevices)
            {
                if (device.position == AVCaptureDevicePositionFront)
                {
                    captureDevice = device;
                    break;
                }
            }
            cameraDevice = captureDevice;
        }
        cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];
        [_session addInput:input];

        // audio input from default mic
        AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
        [_session addInput:micinput];

        // create an output for YUV output with self as delegate
        _captureQueue = dispatch_queue_create("uk.co.gdcl.cameraengine.capture", DISPATCH_QUEUE_SERIAL);
        AVCaptureVideoDataOutput *videoout = [[AVCaptureVideoDataOutput alloc] init];
        [videoout setSampleBufferDelegate:self queue:_captureQueue];
        NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
                                        nil];
        videoout.videoSettings = setcapSettings;
        [_session addOutput:videoout];
        _videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
        [_videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
        NSDictionary *actual = videoout.videoSettings;
        _cy = [[actual objectForKey:@"Width"] integerValue];
        _cx = [[actual objectForKey:@"Height"] integerValue];

        AVCaptureAudioDataOutput *audioout = [[AVCaptureAudioDataOutput alloc] init];
        [audioout setSampleBufferDelegate:self queue:_captureQueue];
        [_session addOutput:audioout];
        _audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];

        [_session startRunning];

        _preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
        _preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
}
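The method above creates the preview layer but does not attach it to anything; a minimal sketch of how a caller might wire it up (the engine and previewHost property names, and a preview property exposing _preview, are assumptions rather than part of the code above):

// Minimal usage sketch (assumed names): "engine" owns the -startup: method above,
// "previewHost" is a plain UIView in the hosting view controller, and "preview"
// is a property exposing the _preview layer created in -startup:.
- (void)viewDidLoad
{
    [super viewDidLoad];

    [self.engine startup:NO];   // start with the back camera
    AVCaptureVideoPreviewLayer *preview = self.engine.preview;
    preview.frame = self.previewHost.bounds;
    [self.previewHost.layer addSublayer:preview];
}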
The problem I am facing is when I switch to the front camera. When I call the method above with the camera changed to the front, the preview layer gets stuck and shows no preview. My doubt is: can we change the capture device in the middle of a capture session? Please guide me on where I am going wrong, or suggest a solution for switching between the front and back cameras while recording.
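For reference, switching cameras without tearing the session down is normally done by swapping the AVCaptureDeviceInput inside a beginConfiguration/commitConfiguration pair; below is a rough sketch of that idea, with _session and _videoInput assumed to be ivars of the same class as -startup: above (_videoInput holding the device input added there). This is a sketch, not tested code from the project.

// Rough sketch: swap the camera input on the already-running session instead of
// rebuilding it. _session and _videoInput are assumed ivars (see note above).
- (void)switchCameraToFront:(BOOL)useFront
{
    AVCaptureDevicePosition wanted = useFront ? AVCaptureDevicePositionFront
                                              : AVCaptureDevicePositionBack;
    AVCaptureDevice *newDevice = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if (device.position == wanted)
        {
            newDevice = device;
            break;
        }
    }
    if (newDevice == nil)
        return;

    NSError *error = nil;
    AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:newDevice error:&error];
    if (newInput == nil)
        return;

    [_session beginConfiguration];          // batch the reconfiguration
    [_session removeInput:_videoInput];     // detach the current camera
    if ([_session canAddInput:newInput])
    {
        [_session addInput:newInput];
        _videoInput = newInput;
    }
    else
    {
        [_session addInput:_videoInput];    // restore the old input on failure
    }
    [_session commitConfiguration];         // session keeps running throughout
}

After the inputs change, the connection previously obtained from the video data output is replaced, so _videoConnection and its orientation would likely need to be re-fetched and re-applied.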
Thanks in advance.
This algorithm is killer; my only problem is that the video and audio are out of sync. Is there anything that has to be configured for audio and video together? – HighFlyingFantasy
@HighFlyingFantasy There is definitely a way. It is a bit complicated. You will have to manually adjust the timing information of the audio sample buffers to match the video. Because we are recreating the audioOutput, its timing information starts from zero each time. Before writing it to the file, you have to keep track of how long audio has been recorded and adjust the values in the audioSampleBuffer accordingly. Geraint Davies implements this technique in his example for pausing and resuming video. A blend of the two should work for you. http://www.gdcl.co.uk/2013/02/20/iPhone-Pause.html – hatebyte
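The retiming described in that comment is done in the linked gdcl.co.uk example with CMSampleBufferCreateCopyWithNewTiming; below is a condensed sketch of the idea, where the accumulated offset is assumed to be tracked elsewhere (for example in a _timeOffset ivar that grows each time an output is recreated).

// Condensed sketch of the retiming idea from the linked example: copy the sample
// buffer with every timestamp shifted back by the accumulated offset, so audio
// written after an output was recreated lines up with the video timeline.
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *info = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, info, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        info[i].decodeTimeStamp = CMTimeSubtract(info[i].decodeTimeStamp, offset);
        info[i].presentationTimeStamp = CMTimeSubtract(info[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef adjusted;
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, count, info, &adjusted);
    free(info);
    return adjusted;    // caller releases with CFRelease after appending it to the writer
}

In captureOutput:didOutputSampleBuffer:fromConnection:, the audio buffer would be passed through a helper like this with the accumulated offset before being handed to the asset writer, and the returned buffer released once written.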