Motion blur effect on a UIImage on iOS

Is there a way to get a motion blur effect on a UIImage? I've tried GPUImage, Filtrr, and iOS Core Image, but all of those only offer a regular blur, not a motion blur.

I've also tried UIImage-DSP, but its motion blur is barely visible. I need something much stronger.


Take a look at http://stackoverflow.com/questions/7475610/how-to-do-a-motion-blur-effect-on-an-uiimageview-in-monotouch – howanghk


I've tried UIImage-DSP, and its motion blur effect is barely visible. I need something stronger. – YogevSitton

Answer


As I commented over on the repository, I just added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. Here is an example of the zoom blur:

(Image: GPUImage zoom blur example)
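
Driving the motion blur filter from a UIImage looks something like the following minimal sketch. The blurSize and blurAngle property names match the filter's public interface, but the capture calls (useNextFrameForImageCapture / imageFromCurrentFramebuffer) belong to later GPUImage versions; earlier versions used imageFromCurrentlyProcessedOutput instead.

// Minimal sketch: apply GPUImageMotionBlurFilter to a UIImage. 
// Assumes GPUImage.h is imported and inputImage is an existing UIImage. 
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage]; 
GPUImageMotionBlurFilter *motionBlur = [[GPUImageMotionBlurFilter alloc] init]; 
motionBlur.blurSize = 2.5;   // blur extent, in pixels 
motionBlur.blurAngle = 45.0; // blur direction, in degrees 

[stillImageSource addTarget:motionBlur]; 
[motionBlur useNextFrameForImageCapture]; // capture API varies by GPUImage version 
[stillImageSource processImage]; 
UIImage *blurredImage = [motionBlur imageFromCurrentFramebuffer]; 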

For the motion blur, I do a 9-hit Gaussian blur in a single direction. This is achieved using the following vertex and fragment shaders:

Vertex:

attribute vec4 position; 
attribute vec4 inputTextureCoordinate; 

uniform highp vec2 directionalTexelStep; 

varying vec2 textureCoordinate; 
varying vec2 oneStepBackTextureCoordinate; 
varying vec2 twoStepsBackTextureCoordinate; 
varying vec2 threeStepsBackTextureCoordinate; 
varying vec2 fourStepsBackTextureCoordinate; 
varying vec2 oneStepForwardTextureCoordinate; 
varying vec2 twoStepsForwardTextureCoordinate; 
varying vec2 threeStepsForwardTextureCoordinate; 
varying vec2 fourStepsForwardTextureCoordinate; 

void main() 
{ 
    gl_Position = position; 

    textureCoordinate = inputTextureCoordinate.xy; 
    oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep; 
    twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep; 
    threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep; 
    fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep; 
    oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep; 
    twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep; 
    threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep; 
    fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep; 
} 

Fragment:

precision highp float; 

uniform sampler2D inputImageTexture; 

varying vec2 textureCoordinate; 
varying vec2 oneStepBackTextureCoordinate; 
varying vec2 twoStepsBackTextureCoordinate; 
varying vec2 threeStepsBackTextureCoordinate; 
varying vec2 fourStepsBackTextureCoordinate; 
varying vec2 oneStepForwardTextureCoordinate; 
varying vec2 twoStepsForwardTextureCoordinate; 
varying vec2 threeStepsForwardTextureCoordinate; 
varying vec2 fourStepsForwardTextureCoordinate; 

void main() 
{ 
    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18; 
    fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05; 
    fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05; 

    gl_FragColor = fragmentColor; 
} 

As an optimization, I calculate the step size between texture samples outside of the fragment shader, using the angle, the blur size, and the image dimensions. This is then passed into the vertex shader, so that I can calculate the texture sampling positions there and interpolate across them in the fragment shader. This avoids dependent texture reads on iOS devices.
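
As a rough illustration of that CPU-side step calculation (a hypothetical sketch, not the filter's actual code; blurAngle, blurSize, imageWidth, imageHeight, and directionalTexelStepUniform are assumed names):

// Hypothetical sketch: derive the per-sample step in normalized texture 
// coordinates from the blur angle (degrees), the blur size (pixels), and 
// the image dimensions, then upload it as the directionalTexelStep uniform. 
CGFloat angleInRadians = blurAngle * M_PI / 180.0; 
GLfloat texelStep[2] = { 
    (GLfloat)(blurSize * cos(angleInRadians) / imageWidth), 
    (GLfloat)(blurSize * sin(angleInRadians) / imageHeight) 
}; 
glUniform2fv(directionalTexelStepUniform, 1, texelStep); 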

The zoom blur is much slower, because I still do these calculations in the fragment shader. No doubt there's a way I can optimize this, but I haven't tried yet. The zoom blur uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary as a function of the pixel's position relative to the center of the blur.

It uses the following fragment shader (and a standard passthrough vertex shader):

varying highp vec2 textureCoordinate; 

uniform sampler2D inputImageTexture; 

uniform highp vec2 blurCenter; 
uniform highp float blurSize; 

void main() 
{ 
    // TODO: Do a more intelligent scaling based on resolution here 
    highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize; 

    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09; 
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05; 

    gl_FragColor = fragmentColor; 
} 

Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes you'll start to see artifacts from the limited number of samples. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples lead to much slower rendering times because of the limited texture sampling bandwidth on iOS devices.
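
Running the filter multiple times just means chaining filter instances, for example (a minimal sketch along the lines of the earlier one):

// Sketch: two chained motion blur passes for a stronger blur, 
// at the cost of a second render pass. 
GPUImageMotionBlurFilter *firstPass = [[GPUImageMotionBlurFilter alloc] init]; 
GPUImageMotionBlurFilter *secondPass = [[GPUImageMotionBlurFilter alloc] init]; 
[stillImageSource addTarget:firstPass]; 
[firstPass addTarget:secondPass]; 
// ...then process and capture from secondPass as in the earlier sketch. 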


Wow, nice one BradLarson! – howanghk