I'm using this sample (https://github.com/google-ar/arcore-android-sdk/tree/master/samples/hello_ar_java), and I want to add the ability to record a video of the placed AR objects using ARCore.
I've tried many things with no success; is there a recommended way to do this?
Creating a video from an OpenGL surface is somewhat involved, but doable. The easiest way to understand it is to use two EGL surfaces, one for the UI and one for the media encoder. A good example of the EGL-level calls needed is the Grafika project on GitHub. I used it as a starting point to figure out the modifications needed to ARCore's HelloAR sample. Since there are quite a few changes, I have broken them down into steps.
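Before the individual steps, here is a rough outline of the per-frame flow these changes implement on the GL thread (pseudocode only; the method and field names match the snippets that follow):

// 1. Run all the non-drawing work: session.update(), handle taps, compute view/projection matrices.
// 2. draw(...)                                    // render into the UI's EGL surface as usual
// 3. If a recording is active:
//      ctx = mRecorder.startCapture();            // make the encoder's EGL surface current
//      draw(...);                                 // issue exactly the same draw calls into the encoder surface
//      mRecorder.stopCapture(ctx, frame.getTimestamp()); // set the timestamp, swap buffers, restore the UI surface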
Make changes to support writing to external storage
To save the video, you need access to a place to write the video file, so you have to request this permission.
Declare the permission in the AndroidManifest.xml file:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
Then change CameraPermissionHelper.java to request the external storage permission as well as the camera permission. To do this, make an array of the permissions, use it when requesting the permissions, and iterate over it when checking the permission state:
private static final String REQUIRED_PERMISSIONS[] = {
        Manifest.permission.CAMERA,
        Manifest.permission.WRITE_EXTERNAL_STORAGE
};

public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
            CAMERA_PERMISSION_CODE);
}

public static boolean hasCameraPermission(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
        if (ContextCompat.checkSelfPermission(activity, p) !=
                PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

public static boolean shouldShowRequestPermissionRationale(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
        if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
            return true;
        }
    }
    return false;
}
Add recording to HelloARActivity
Add a simple button and text view to the UI at the bottom of activity_main.xml:
<Button
    android:id="@+id/fboRecord_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignStart="@+id/surfaceview"
    android:layout_alignTop="@+id/surfaceview"
    android:onClick="clickToggleRecording"
    android:text="@string/toggleRecordingOn"
    tools:ignore="OnClick"/>

<TextView
    android:id="@+id/nowRecording_text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignBaseline="@+id/fboRecord_button"
    android:layout_alignBottom="@+id/fboRecord_button"
    android:layout_toEndOf="@+id/fboRecord_button"
    android:text="" />
In HelloARActivity, add member variables used for recording:
private VideoRecorder mRecorder;
private android.opengl.EGLConfig mAndroidEGLConfig;
Initialize mAndroidEGLConfig in onSurfaceCreated(). We will use this config object to create the encoder surface:
EGL10 egl10 = (EGL10)EGLContext.getEGL();
javax.microedition.khronos.egl.EGLDisplay display = egl10.eglGetCurrentDisplay();
int v[] = new int[2];
egl10.eglGetConfigAttrib(display, config, EGL10.EGL_CONFIG_ID, v);
EGLDisplay androidDisplay = EGL14.eglGetCurrentDisplay();
int attribs[] = {EGL14.EGL_CONFIG_ID, v[0], EGL14.EGL_NONE};
android.opengl.EGLConfig myConfig[] = new android.opengl.EGLConfig[1];
EGL14.eglChooseConfig(androidDisplay, attribs, 0, myConfig, 0, 1, v, 1);
this.mAndroidEGLConfig = myConfig[0];
Refactor the onDrawFrame() method so that all the non-drawing code executes first and the actual drawing is done in a method named draw(). That way, during recording we can update the ARCore frame, handle input, draw to the UI, and then draw again to the encoder:
@Override
public void onDrawFrame(GL10 gl) {
    if (mSession == null) {
        return;
    }
    // Notify ARCore session that the view size changed so that
    // the perspective matrix and
    // the video background can be properly adjusted.
    mDisplayRotationHelper.updateSessionIfNeeded(mSession);

    try {
        // Obtain the current frame from ARSession. When the
        // configuration is set to
        // UpdateMode.BLOCKING (it is by default), this will
        // throttle the rendering to the camera framerate.
        Frame frame = mSession.update();
        Camera camera = frame.getCamera();

        // Handle taps. Handling only one tap per frame, as taps are
        // usually low frequency compared to frame rate.
        MotionEvent tap = mQueuedSingleTaps.poll();
        if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
            for (HitResult hit : frame.hitTest(tap)) {
                // Check if any plane was hit, and if it was hit inside the plane polygon
                Trackable trackable = hit.getTrackable();
                if (trackable instanceof Plane
                        && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                    // Cap the number of objects created. This avoids overloading both the
                    // rendering system and ARCore.
                    if (mAnchors.size() >= 20) {
                        mAnchors.get(0).detach();
                        mAnchors.remove(0);
                    }
                    // Adding an Anchor tells ARCore that it should track this position in
                    // space. This anchor is created on the Plane to place the 3d model
                    // in the correct position relative both to the world and to the plane.
                    mAnchors.add(hit.createAnchor());

                    // Hits are sorted by depth. Consider only closest hit on a plane.
                    break;
                }
            }
        }

        // Get projection matrix.
        float[] projmtx = new float[16];
        camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

        // Get camera matrix and draw.
        float[] viewmtx = new float[16];
        camera.getViewMatrix(viewmtx, 0);

        // Compute lighting from average intensity of the image.
        final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

        // Visualize tracked points.
        PointCloud pointCloud = frame.acquirePointCloud();
        mPointCloud.update(pointCloud);

        draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
                viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

        if (mRecorder != null && mRecorder.isRecording()) {
            VideoRecorder.CaptureContext ctx = mRecorder.startCapture();
            if (ctx != null) {
                // draw again
                draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
                        viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

                // restore the context
                mRecorder.stopCapture(ctx, frame.getTimestamp());
            }
        }

        // Application is responsible for releasing the point cloud resources after
        // using it.
        pointCloud.release();

        // Check if we detected at least one plane. If so, hide the loading message.
        if (mMessageSnackbar != null) {
            for (Plane plane : mSession.getAllTrackables(Plane.class)) {
                if (plane.getType() ==
                        com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
                        && plane.getTrackingState() == TrackingState.TRACKING) {
                    hideLoadingMessage();
                    break;
                }
            }
        }
    } catch (Throwable t) {
        // Avoid crashing the application due to unhandled exceptions.
        Log.e(TAG, "Exception on the OpenGL thread", t);
    }
}
private void draw(Frame frame, boolean paused,
                  float[] viewMatrix, float[] projectionMatrix,
                  Pose displayOrientedPose, float lightIntensity) {
    // Clear screen to notify driver it should not load
    // any pixels from previous frame.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Draw background.
    mBackgroundRenderer.draw(frame);

    // If not tracking, don't draw 3d objects.
    if (paused) {
        return;
    }

    mPointCloud.draw(viewMatrix, projectionMatrix);

    // Visualize planes.
    mPlaneRenderer.drawPlanes(
            mSession.getAllTrackables(Plane.class),
            displayOrientedPose, projectionMatrix);

    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (Anchor anchor : mAnchors) {
        if (anchor.getTrackingState() != TrackingState.TRACKING) {
            continue;
        }
        // Get the current pose of an Anchor in world space.
        // The Anchor pose is
        // updated during calls to session.update() as ARCore refines
        // its estimate of the world.
        anchor.getPose().toMatrix(mAnchorMatrix, 0);

        // Update and draw the model and its shadow.
        mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
        mVirtualObjectShadow.draw(viewMatrix, projectionMatrix, lightIntensity);
    }
}
Handle toggling the recording on and off:
public void clickToggleRecording(View view) {
    Log.d(TAG, "clickToggleRecording");
    if (mRecorder == null) {
        File outputFile = new File(Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES) + "/HelloAR",
                "fbo-gl-" + Long.toHexString(System.currentTimeMillis()) + ".mp4");
        File dir = outputFile.getParentFile();
        if (!dir.exists()) {
            dir.mkdirs();
        }

        try {
            mRecorder = new VideoRecorder(mSurfaceView.getWidth(),
                    mSurfaceView.getHeight(),
                    VideoRecorder.DEFAULT_BITRATE, outputFile, this);
            mRecorder.setEglConfig(mAndroidEGLConfig);
        } catch (IOException e) {
            Log.e(TAG, "Exception starting recording", e);
        }
    }
    // Guard against the case where creating the recorder failed above.
    if (mRecorder != null) {
        mRecorder.toggleRecording();
    }
    updateControls();
}

private void updateControls() {
    Button toggleRelease = findViewById(R.id.fboRecord_button);
    int id = (mRecorder != null && mRecorder.isRecording()) ?
            R.string.toggleRecordingOff : R.string.toggleRecordingOn;
    toggleRelease.setText(id);

    TextView tv = findViewById(R.id.nowRecording_text);
    if (id == R.string.toggleRecordingOff) {
        tv.setText(getString(R.string.nowRecording));
    } else {
        tv.setText("");
    }
}
Implement the listener callback in the activity to receive video recording state changes:
@Override
public void onVideoRecorderEvent(VideoRecorder.VideoEvent videoEvent) {
    Log.d(TAG, "VideoEvent: " + videoEvent);
    updateControls();

    if (videoEvent == VideoRecorder.VideoEvent.RecordingStopped) {
        mRecorder = null;
    }
}
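For this callback to compile, the activity also has to declare that it implements the listener interface defined on VideoRecorder further below (it is passed as `this` in clickToggleRecording above). Assuming the stock class declaration of the sample, that looks roughly like the following; keep whatever interfaces the class already implements:

public class HelloArActivity extends AppCompatActivity
        implements GLSurfaceView.Renderer, VideoRecorder.VideoRecorderListener {
    // existing body of the activity, unchanged
}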
Implement a VideoRecorder class to feed images to the encoder
The VideoRecorder class is used to feed images to the media encoder. It creates an EGL window surface backed by the media encoder's input surface. The general approach is that, while recording, we draw once for the UI display and then make exactly the same drawing calls to the media encoder surface.
The constructor takes the recording parameters and a listener that is notified of events during recording:
public VideoRecorder(int width, int height, int bitrate, File outputFile,
                     VideoRecorderListener listener) throws IOException {
    this.listener = listener;
    mEncoderCore = new VideoEncoderCore(width, height, bitrate, outputFile);
    mVideoRect = new Rect(0, 0, width, height);
}
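For reference, the rest of this class assumes roughly the following member fields; the types follow from how each field is used in the snippets, and the DEFAULT_BITRATE value of 4 Mbps is the one mentioned in the comments at the end of this answer:

public static final int DEFAULT_BITRATE = 4000000; // 4 Mbps

private VideoRecorderListener listener;      // notified of recording start/stop events
private VideoEncoderCore mEncoderCore;       // Grafika class wrapping MediaCodec/MediaMuxer
private TextureMovieEncoder2 mVideoEncoder;  // Grafika class that drives the encoder
private CaptureContext mEncoderContext;      // EGL display/surfaces of the encoder
private Rect mVideoRect;                     // size of the recorded video
private android.opengl.EGLConfig mEGLConfig; // set from the activity via setEglConfig()
private boolean mRecording;                  // current recording state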
When capturing a frame starts, we need to create a new EGL surface for the encoder (the first time only). We then notify the encoder that a new frame is coming, make the encoder surface the current EGL surface, and return so the caller can make the drawing calls:
public CaptureContext startCapture() {
    if (mVideoEncoder == null) {
        return null;
    }

    if (mEncoderContext == null) {
        mEncoderContext = new CaptureContext();
        mEncoderContext.windowDisplay = EGL14.eglGetCurrentDisplay();

        // Create a window surface, and attach it to the Surface we received.
        int[] surfaceAttribs = {
                EGL14.EGL_NONE
        };
        mEncoderContext.windowDrawSurface = EGL14.eglCreateWindowSurface(
                mEncoderContext.windowDisplay,
                mEGLConfig, mEncoderCore.getInputSurface(),
                surfaceAttribs, 0);
        mEncoderContext.windowReadSurface = mEncoderContext.windowDrawSurface;
    }

    CaptureContext displayContext = new CaptureContext();
    displayContext.initialize();

    // Draw for recording, swap.
    mVideoEncoder.frameAvailableSoon();

    // Make the input surface current
    // mInputWindowSurface.makeCurrent();
    EGL14.eglMakeCurrent(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface, mEncoderContext.windowReadSurface,
            EGL14.eglGetCurrentContext());

    // If we don't set the scissor rect, the glClear() we use to draw the
    // light-grey background will draw outside the viewport and muck up our
    // letterboxing. Might be better if we disabled the test immediately after
    // the glClear(). Of course, if we were clearing the frame background to
    // black it wouldn't matter.
    //
    // We do still need to clear the pixels outside the scissor rect, of course,
    // or we'll get garbage at the edges of the recording. We can either clear
    // the whole thing and accept that there will be a lot of overdraw, or we
    // can issue multiple scissor/clear calls. Some GPUs may have a special
    // optimization for zeroing out the color buffer.
    //
    // For now, be lazy and zero the whole thing. At some point we need to
    // examine the performance here.
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    GLES20.glViewport(mVideoRect.left, mVideoRect.top,
            mVideoRect.width(), mVideoRect.height());
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    GLES20.glScissor(mVideoRect.left, mVideoRect.top,
            mVideoRect.width(), mVideoRect.height());

    return displayContext;
}
When the drawing is completed, the EGL context needs to be restored back to the UI surface:
public void stopCapture(CaptureContext oldContext, long timeStampNanos) {
    if (oldContext == null) {
        return;
    }
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
    EGLExt.eglPresentationTimeANDROID(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface, timeStampNanos);
    EGL14.eglSwapBuffers(mEncoderContext.windowDisplay,
            mEncoderContext.windowDrawSurface);

    // Restore.
    GLES20.glViewport(0, 0, oldContext.getWidth(), oldContext.getHeight());
    EGL14.eglMakeCurrent(oldContext.windowDisplay,
            oldContext.windowDrawSurface, oldContext.windowReadSurface,
            EGL14.eglGetCurrentContext());
}
Add some bookkeeping methods:
public boolean isRecording() {
    return mRecording;
}

public void toggleRecording() {
    if (isRecording()) {
        stopRecording();
    } else {
        startRecording();
    }
}

protected void startRecording() {
    mRecording = true;
    if (mVideoEncoder == null) {
        mVideoEncoder = new TextureMovieEncoder2(mEncoderCore);
    }
    if (listener != null) {
        listener.onVideoRecorderEvent(VideoEvent.RecordingStarted);
    }
}

protected void stopRecording() {
    mRecording = false;
    if (mVideoEncoder != null) {
        mVideoEncoder.stopRecording();
    }
    if (listener != null) {
        listener.onVideoRecorderEvent(VideoEvent.RecordingStopped);
    }
}

public void setEglConfig(EGLConfig eglConfig) {
    this.mEGLConfig = eglConfig;
}

public enum VideoEvent {
    RecordingStarted,
    RecordingStopped
}

public interface VideoRecorderListener {
    void onVideoRecorderEvent(VideoEvent videoEvent);
}
The inner class CaptureContext keeps track of the display and surfaces, so that the multiple surfaces used with the EGL context can be handled easily:
public static class CaptureContext {
    EGLDisplay windowDisplay;
    EGLSurface windowReadSurface;
    EGLSurface windowDrawSurface;
    private int mWidth;
    private int mHeight;

    public void initialize() {
        windowDisplay = EGL14.eglGetCurrentDisplay();
        windowDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
        windowReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
        int v[] = new int[1];
        EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_WIDTH,
                v, 0);
        mWidth = v[0];
        v[0] = -1;
        EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_HEIGHT,
                v, 0);
        mHeight = v[0];
    }
    /**
     * Returns the surface's width, in pixels.
     * <p>
     * If this is called on a window surface, and the underlying
     * surface is in the process
     * of changing size, we may not see the new size right away
     * (e.g. in the "surfaceChanged"
     * callback). The size should match after the next buffer swap.
     */
    public int getWidth() {
        if (mWidth < 0) {
            int v[] = new int[1];
            EGL14.eglQuerySurface(windowDisplay,
                    windowDrawSurface, EGL14.EGL_WIDTH, v, 0);
            mWidth = v[0];
        }
        return mWidth;
    }

    /**
     * Returns the surface's height, in pixels.
     */
    public int getHeight() {
        if (mHeight < 0) {
            int v[] = new int[1];
            EGL14.eglQuerySurface(windowDisplay, windowDrawSurface,
                    EGL14.EGL_HEIGHT, v, 0);
            mHeight = v[0];
        }
        return mHeight;
    }
}
Add the video encoder classes
The VideoEncoderCore class is copied from Grafika, along with the TextureMovieEncoder2 class.
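Those two classes are not reproduced here; the code above only relies on a small part of their API, roughly the following (listed for orientation only, see the Grafika sources for the actual definitions):

// VideoEncoderCore(int width, int height, int bitRate, File outputFile)  // wraps MediaCodec and MediaMuxer
// Surface getInputSurface()                     // the encoder input surface used for the EGL window surface
// TextureMovieEncoder2(VideoEncoderCore core)   // drains the encoder on its own thread
// void frameAvailableSoon()                     // tell the encoder thread a new frame is coming
// void stopRecording()                          // stop encoding and release the encoder/muxer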
Could you tell us the VideoRecorder.DEFAULT_BITRATE value? I'm getting this runtime error: android.media.MediaCodec$CodecException: Error 0xfffffc0e – MathaN
I used 4M, 4000000. From what I remember, that error is usually related to the dimensions rather than the bitrate? Maybe double-check the width and height to make sure they are reasonable. –
OK, thanks, I'll check it :-) – MathaN