I want to play video on a Direct3D 9 device by uploading the video frames as Direct3D textures, using:
- nVLC - for extracting RGB32 frames from the file
- SlimDX - for actually displaying the frames on the device as textures
Here is my code that receives the RGB32 frames:
_videoWrapper.SetCallback(delegate(Bitmap frame)
{
    if (_mainContentSurface == null || _dead)
        return;

    var bmpData = frame.LockBits(new Rectangle(0, 0, frame.Width, frame.Height),
                                 ImageLockMode.ReadOnly, frame.PixelFormat);
    var ptr = bmpData.Scan0;
    var size = bmpData.Stride * frame.Height;

    _mainContentSurface.Buffer = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(ptr, _mainContentSurface.Buffer, 0, size);

    _mainContentSurface.SetTexture(_mainContentSurface.Buffer, frame.Width, frame.Height);
    _secondaryContentSurface.SetTexture(_mainContentSurface.Buffer, frame.Width, frame.Height); // same buffer to second window

    _mainContentSurface.VideoFrameRate.Value = _videoWrapper.ActualFrameRate;

    frame.UnlockBits(bmpData);
});
And here is my SetTexture implementation, where the texture is (re)created and the frame data is actually uploaded:
public void SetTexture(byte[] image, int width, int height)
{
    if (Context9 != null && Context9.Device != null)
    {
        if (IsFormClosed)
            return;

        // Rendering is separate from the "frame fetch" thread, if that makes sense.
        // Also note that we recreate the video texture if needed.
        _renderWindow.BeginInvoke(new Action(() =>
        {
            if (_image == null || _currentVideoTextureWidth != width || _currentVideoTextureHeight != height)
            {
                if (_image != null)
                    _image.Dispose();

                _image = new Texture(Context9.Device, width, height, 0, Usage.Dynamic, Format.A8R8G8B8,
                                     Pool.Default);
                _currentVideoTextureWidth = width;
                _currentVideoTextureHeight = height;

                if (_image == null)
                    throw new Exception("Video card does not support textures power of TWO or dynamic textures. Get a video card");
            }

            // Upload the frame data into the texture.
            var data = _image.LockRectangle(0, LockFlags.None);
            data.Data.Write(image, 0, image.Length);
            _image.UnlockRectangle(0);
        }));
    }
}
Finally, the actual rendering:
Context9.Device.SetStreamSource(0, _videoVertices, 0, Vertex.SizeBytes);
Context9.Device.VertexFormat = Vertex.Format;
// Setup our texture. Using Textures introduces the texture stage states,
// which govern how Textures get blended together (in the case of multiple
// Textures) and lighting information.
Context9.Device.SetTexture(0, _image);
// The sampler states govern how smooth the texture is displayed.
Context9.Device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Linear);
Context9.Device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Linear);
Context9.Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.Linear);
// Now drawing 2 triangles, for a quad.
Context9.Device.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);
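(The Vertex type and _videoVertices buffer are not shown above. For completeness, here is a minimal sketch of what a pre-transformed, textured quad could look like in SlimDX; the field names, the BuildQuad helper, and renderWidth/renderHeight are assumptions for illustration, not the code from the post.)

using System.Runtime.InteropServices;
using SlimDX;
using SlimDX.Direct3D9;

[StructLayout(LayoutKind.Sequential)]
struct Vertex
{
    public Vector4 Position;   // pre-transformed screen-space position (XYZRHW)
    public Vector2 TexCoord;   // texture coordinate

    public const int SizeBytes = 24; // Vector4 (16 bytes) + Vector2 (8 bytes)
    public static readonly VertexFormat Format = VertexFormat.PositionRhw | VertexFormat.Texture1;
}

// Six vertices = two triangles covering the target rectangle.
// The -0.5f offset maps texels to pixels under Direct3D 9 rasterization rules.
Vertex[] BuildQuad(float w, float h)
{
    return new[]
    {
        new Vertex { Position = new Vector4(-0.5f,    -0.5f,    0f, 1f), TexCoord = new Vector2(0, 0) },
        new Vertex { Position = new Vector4(w - 0.5f, -0.5f,    0f, 1f), TexCoord = new Vector2(1, 0) },
        new Vertex { Position = new Vector4(-0.5f,    h - 0.5f, 0f, 1f), TexCoord = new Vector2(0, 1) },
        new Vertex { Position = new Vector4(w - 0.5f, -0.5f,    0f, 1f), TexCoord = new Vector2(1, 0) },
        new Vertex { Position = new Vector4(w - 0.5f, h - 0.5f, 0f, 1f), TexCoord = new Vector2(1, 1) },
        new Vertex { Position = new Vector4(-0.5f,    h - 0.5f, 0f, 1f), TexCoord = new Vector2(0, 1) },
    };
}

// Fill the vertex buffer once (or whenever the render size changes).
var vertices = BuildQuad(renderWidth, renderHeight);
_videoVertices = new VertexBuffer(Context9.Device, vertices.Length * Vertex.SizeBytes,
                                  Usage.WriteOnly, Vertex.Format, Pool.Managed);
var vbStream = _videoVertices.Lock(0, 0, LockFlags.None);
vbStream.WriteRange(vertices);
_videoVertices.Unlock();

With a buffer like this, the SetStreamSource / DrawPrimitives(TriangleList, 0, 2) calls above draw the video as a single textured quad.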
Now, this works on my machine without any problem, with every video file and at every position. But when I tested it on Windows XP, the picture was completely corrupted. Here are screenshots of the broken and the working output:
http://www.upload.ee/image/2941734/untitled.PNG
http://www.upload.ee/image/2941762/Untitled2.png
Note that in the first picture you can see both _mainContentSurface and _secondaryContentSurface. Does anyone have an idea what the problem might be?
Thanks for the answers. I found the cause a few weeks ago. I tracked it down to the automatically generated mips, which, combined with nVLC hardware acceleration, corrupted the image completely. Writing the data into the buffer without taking the texture's actual pitch into account was also a problem. I managed to get it working nicely (YUV->RGB is now done in a shader :]). A WriteDataPitch() example would have been nice though; I had to figure that out on my own.
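For anyone who lands here with the same corruption, below is a rough sketch of the two fixes described above: create the texture with exactly one mip level (passing 0 requests a full auto-generated chain, and only level 0 ever gets written), and copy the frame row by row using the pitch reported by LockRectangle instead of assuming pitch == width * 4. SlimDX has no built-in WriteDataPitch(); the helper name, its signature, and the A8R8G8B8 (4 bytes per pixel) assumption are my own illustration, not the author's actual code.

using SlimDX;
using SlimDX.Direct3D9;

// One mip level only; an auto-generated chain that never gets filled is what broke the image here.
_image = new Texture(Context9.Device, width, height, 1, Usage.Dynamic,
                     Format.A8R8G8B8, Pool.Default);

// Hypothetical helper: copy an RGB32 frame into mip level 0, honouring the surface pitch.
void WriteDataPitch(Texture texture, byte[] frame, int width, int height)
{
    const int bytesPerPixel = 4; // A8R8G8B8
    int rowBytes = width * bytesPerPixel;

    DataRectangle rect = texture.LockRectangle(0, LockFlags.None);
    try
    {
        for (int y = 0; y < height; y++)
        {
            // Seek to the start of row y in the locked surface, then write exactly one
            // row of source data; padding bytes at the end of the surface row are skipped.
            rect.Data.Position = (long)y * rect.Pitch;
            rect.Data.Write(frame, y * rowBytes, rowBytes);
        }
    }
    finally
    {
        texture.UnlockRectangle(0);
    }
}

This corresponds to the two issues described in the comment above; the YUV->RGB conversion in a pixel shader is a separate optimization on top of it.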