
Resize DXGI Resource or Texture2D in SharpDX

I want to resize the screen captured with the Desktop Duplication API in SharpDX. I am using the Screen Capture sample code from the SharpDX Samples repository; the relevant portion is below:

SharpDX.DXGI.Resource screenResource; 
OutputDuplicateFrameInformation duplicateFrameInformation; 

// Try to get duplicated frame within given time 
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource); 

if (i > 0) 
{ 
    // copy resource into memory that can be accessed by the CPU 
    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>()) 
    device.ImmediateContext.CopyResource(screenTexture2D, screenTexture); 

    // Get the desktop capture texture 
    var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None); 

    System.Diagnostics.Debug.WriteLine(watch.Elapsed); 

    // Create Drawing.Bitmap 
    var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb); 
    var boundsRect = new System.Drawing.Rectangle(0, 0, width, height); 

    // Copy pixels from screen capture Texture to GDI bitmap 
    var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat); 
    var sourcePtr = mapSource.DataPointer; 
    var destPtr = mapDest.Scan0; 
    for (int y = 0; y < height; y++) 
    { 
     // Iterate and write to bitmap... 

I want to resize the image to something much smaller than the actual screen size before I process it as a byte array. I do not need to save the image, I only need to look at the bytes, and I would like to do this relatively quickly and efficiently (for example on the GPU, if possible).

Since CopyResource requires the output dimensions to match the input, I cannot scale during that copy. Can I perform another, scaled copy from my screenTexture2D? And how exactly do I scale the resource: do I use a swap chain, a matrix transform, or something else?

Answers

5

If the size you want to resize to is a power-of-two divisor of the screen size (1/2, 1/4, ...), you can do it like this (a short helper sketch follows this list):

  • Create a smaller texture with the RenderTarget and ShaderResource bind flags, the GenerateMipMaps option, the same size as the screen, and a mip count > 1 (2 for size/2, 3 for size/4, etc.).
  • Copy the screen texture into this smaller texture.
  • Call DeviceContext.GenerateMips on the smaller texture.
  • Copy the selected mip level of the smaller texture (1 for /2, 2 for /4, etc.) into a staging texture, which must itself be declared with the size of the mip level that is going to be used.
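To make the arithmetic in these steps concrete, here is a tiny hedged helper (DownscaleMath and MipSliceForDivisor are hypothetical names, not part of the answer or of the SharpDX API) that maps a power-of-two divisor to the mip slice to copy:

using System;

static class DownscaleMath
{
    // For a power-of-two divisor d = 2^n (2, 4, 8, ...):
    //   - the mip-mapped "smaller" texture needs MipLevels >= n + 1 (mip 0 is full size),
    //   - mip level n is the subresource to copy out,
    //   - the CPU staging texture must be declared as (width / d) x (height / d).
    public static int MipSliceForDivisor(int divisor)
    {
        return (int)Math.Round(Math.Log(divisor, 2)); // 2 -> 1, 4 -> 2, 8 -> 3, ...
    }
}

For the /2 case used in the code below, MipSliceForDivisor(2) is 1; that is why the answer declares MipLevels = 4 (anything greater than 1 is enough for /2), copies subresource 1, and sizes the staging texture width/2 by height/2.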

A quick hack on the original code to generate a /2 texture would be something like this:

[STAThread] 
    private static void Main() 
    { 
     // # of graphics card adapter 
     const int numAdapter = 0; 

     // # of output device (i.e. monitor) 
     const int numOutput = 0; 

     const string outputFileName = "ScreenCapture.bmp"; 

     // Create DXGI Factory1 
     var factory = new Factory1(); 
     var adapter = factory.GetAdapter1(numAdapter); 

     // Create device from Adapter 
     var device = new Device(adapter); 

     // Get DXGI.Output 
     var output = adapter.GetOutput(numOutput); 
     var output1 = output.QueryInterface<Output1>(); 

     // Width/Height of desktop to capture 
     int width = output.Description.DesktopBounds.Width; 
     int height = output.Description.DesktopBounds.Height; 

     // Create Staging texture CPU-accessible 
     var textureDesc = new Texture2DDescription 
           { 
            CpuAccessFlags = CpuAccessFlags.Read, 
            BindFlags = BindFlags.None, 
            Format = Format.B8G8R8A8_UNorm, 
            Width = width/2, 
            Height = height/2, 
            OptionFlags = ResourceOptionFlags.None, 
            MipLevels = 1, 
            ArraySize = 1, 
            SampleDescription = { Count = 1, Quality = 0 }, 
            Usage = ResourceUsage.Staging 
           }; 
     var stagingTexture = new Texture2D(device, textureDesc); 

     // Create smaller GPU-only texture (with mipmaps) used to downscale the screen capture 
     var smallerTextureDesc = new Texture2DDescription 
     { 
      CpuAccessFlags = CpuAccessFlags.None, 
      BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource, 
      Format = Format.B8G8R8A8_UNorm, 
      Width = width, 
      Height = height, 
      OptionFlags = ResourceOptionFlags.GenerateMipMaps, 
      MipLevels = 4, 
      ArraySize = 1, 
      SampleDescription = { Count = 1, Quality = 0 }, 
      Usage = ResourceUsage.Default 
     }; 
     var smallerTexture = new Texture2D(device, smallerTextureDesc); 
     var smallerTextureView = new ShaderResourceView(device, smallerTexture); 

     // Duplicate the output 
     var duplicatedOutput = output1.DuplicateOutput(device); 

     bool captureDone = false; 
     for (int i = 0; !captureDone; i++) 
     { 
      try 
      { 
       SharpDX.DXGI.Resource screenResource; 
       OutputDuplicateFrameInformation duplicateFrameInformation; 

       // Try to get duplicated frame within given time 
       duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource); 

       if (i > 0) 
       { 
        // copy the acquired screen frame into the smaller mip-mapped texture 
        using (var screenTexture2D = screenResource.QueryInterface<Texture2D>()) 
         device.ImmediateContext.CopySubresourceRegion(screenTexture2D, 0, null, smallerTexture, 0); 

        // Generates the mipmap of the screen 
        device.ImmediateContext.GenerateMips(smallerTextureView); 

        // Copy the mipmap 1 of smallerTexture (size/2) to the staging texture 
        device.ImmediateContext.CopySubresourceRegion(smallerTexture, 1, null, stagingTexture, 0); 

        // Get the desktop capture texture 
        var mapSource = device.ImmediateContext.MapSubresource(stagingTexture, 0, MapMode.Read, MapFlags.None); 

        // Create Drawing.Bitmap 
        var bitmap = new System.Drawing.Bitmap(width/2, height/2, PixelFormat.Format32bppArgb); 
        var boundsRect = new System.Drawing.Rectangle(0, 0, width/2, height/2); 

        // Copy pixels from screen capture Texture to GDI bitmap 
        var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat); 
        var sourcePtr = mapSource.DataPointer; 
        var destPtr = mapDest.Scan0; 
        for (int y = 0; y < height/2; y++) 
        { 
         // Copy a single line 
         Utilities.CopyMemory(destPtr, sourcePtr, width/2 * 4); 

         // Advance pointers 
         sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch); 
         destPtr = IntPtr.Add(destPtr, mapDest.Stride); 
        } 

        // Release source and dest locks 
        bitmap.UnlockBits(mapDest); 
        device.ImmediateContext.UnmapSubresource(stagingTexture, 0); 

        // Save the output 
        bitmap.Save(outputFileName); 

        // Capture done 
        captureDone = true; 
       } 

       screenResource.Dispose(); 
       duplicatedOutput.ReleaseFrame(); 

      } 
      catch (SharpDXException e) 
      { 
       if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code) 
       { 
        throw; 
       } 
      } 
     } 

     // Display the texture using system associated viewer 
     System.Diagnostics.Process.Start(Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, outputFileName))); 

     // TODO: We should clean up all allocated COM objects here 
    } 

6

You need to take your original source surface in GPU memory and draw (stretch) it onto a smaller surface. This involves simple vertex/pixel shaders, which some people with simple needs would rather bypass.

I would look around to see whether someone has made a sprite library for SharpDX; it ought to be a common "thing"... or use Direct2D (which is more fun). Since D2D is just a user-mode library on top of D3D, it interoperates with D3D very easily.

I have never used SharpDX, but from memory you would do something like this (a rough SharpDX sketch follows the steps):

1) Create an ID2D1Device, wrapping your existing DXGI device (make sure your DXGI device was created with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag).

2) Get an ID2D1DeviceContext from the ID2D1Device.

3) Wrap your source and destination DXGI surfaces into D2D bitmaps with ID2D1DeviceContext::CreateBitmapFromDxgiSurface.

4) ID2D1DeviceContext::SetTarget with your destination surface.

5) BeginDraw, ID2D1DeviceContext::DrawBitmap passing your source D2D bitmap, then EndDraw.

6) Save your destination.
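
A minimal, untested sketch of those steps in SharpDX might look like the following. It assumes SharpDX.Direct2D1 is referenced and that the D3D11 device was created with DeviceCreationFlags.BgraSupport; d3dDevice, sourceTexture (a B8G8R8A8 texture holding the captured frame) and smallTexture (a smaller B8G8R8A8 texture with BindFlags.RenderTarget) are assumed to exist already, and the names and BitmapProperties1 settings are illustrative, not taken from the answer:

using SharpDX.Direct2D1;
using SharpDX.DXGI;
using AlphaMode = SharpDX.Direct2D1.AlphaMode;
using D3D11Device = SharpDX.Direct3D11.Device;
using D3D11Texture2D = SharpDX.Direct3D11.Texture2D;

static class D2DDownscale
{
    // Draws sourceTexture scaled down onto smallTexture using Direct2D.
    // Both textures are assumed to be B8G8R8A8_UNorm.
    public static void Scale(D3D11Device d3dDevice, D3D11Texture2D sourceTexture, D3D11Texture2D smallTexture)
    {
        // 1) Wrap the existing D3D11 device (created with DeviceCreationFlags.BgraSupport) in a D2D device.
        using (var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device>())
        using (var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice))
        // 2) Get a D2D device context from the D2D device.
        using (var d2dContext = new DeviceContext(d2dDevice, DeviceContextOptions.None))
        // 3) Wrap the source and destination DXGI surfaces into D2D bitmaps.
        using (var sourceSurface = sourceTexture.QueryInterface<Surface>())
        using (var targetSurface = smallTexture.QueryInterface<Surface>())
        using (var sourceBitmap = new Bitmap1(d2dContext, sourceSurface,
            new BitmapProperties1(new SharpDX.Direct2D1.PixelFormat(Format.B8G8R8A8_UNorm, AlphaMode.Premultiplied),
                                  96, 96, BitmapOptions.None)))
        using (var targetBitmap = new Bitmap1(d2dContext, targetSurface,
            new BitmapProperties1(new SharpDX.Direct2D1.PixelFormat(Format.B8G8R8A8_UNorm, AlphaMode.Premultiplied),
                                  96, 96, BitmapOptions.Target)))
        {
            // 4) Point the device context at the destination bitmap.
            d2dContext.Target = targetBitmap;

            // 5) Draw the source bitmap scaled down to fill the destination.
            d2dContext.BeginDraw();
            d2dContext.DrawBitmap(sourceBitmap,
                new SharpDX.RectangleF(0, 0, targetBitmap.Size.Width, targetBitmap.Size.Height),
                1.0f, BitmapInterpolationMode.Linear);
            d2dContext.EndDraw();

            // 6) smallTexture now holds the downscaled frame; copy it into a staging
            //    texture and Map it (as in the answer above) to read the bytes on the CPU.
        }
    }
}

DrawBitmap with a destination rectangle does the stretch on the GPU, so no CPU-side resampling is needed; BitmapInterpolationMode.NearestNeighbor can be used instead of Linear if speed matters more than quality.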

0

Here is a pixelation example...

d2d_device_context_h()->BeginDraw(); 

// Pass 1: draw the full-size source bitmap (mp_ppBitmap0) into a shrunken 
// rectangle on the intermediate bitmap (mp_ppBitmap1), using linear interpolation. 
d2d_device_context_h()->SetTarget(mp_ppBitmap1.Get()); 
D2D1_SIZE_F rtSize = mp_ppBitmap1->GetSize(); 
rtSize.height *= (1.0f / cbpx.iPixelsize.y); 
rtSize.width  *= (1.0f / cbpx.iPixelsize.x); 
D2D1_RECT_F rtRect = { 0.0f, 0.0f, rtSize.width, rtSize.height }; 
D2D1_SIZE_F rsSize = mp_ppBitmap0->GetSize(); 
D2D1_RECT_F rsRect = { 0.0f, 0.0f, rsSize.width, rsSize.height }; 
d2d_device_context_h()->DrawBitmap(mp_ppBitmap0.Get(), &rtRect, 1.0f, 
    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR, &rsRect); 

// Pass 2: draw the shrunken region back over the original bitmap at full size, 
// using nearest-neighbor interpolation to get the blocky, pixelated look. 
d2d_device_context_h()->SetTarget(mp_ppBitmap0.Get()); 
d2d_device_context_h()->DrawBitmap(mp_ppBitmap1.Get(), &rsRect, 1.0f, 
    D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR, &rtRect); 

d2d_device_context_h()->EndDraw(); 

Here iPixelsize.xy is the size of a "pixelated pixel". Note that I use linear interpolation only when shrinking the bitmap, not when re-enlarging it; that is what produces the pixelation effect.