2012-03-13

I am developing a drawing application for iPhone and iPad [based on the GLPaint sample app]. The app fills color into a painting image by drawing lines on the screen wherever the user touches. It works correctly on the iPhone. On the iPad, without zooming, the lines in the paint view are drawn correctly [no pixel distortion], but after zooming the paint view the OpenGL ES content looks pixelated and distorted, i.e. it is not high resolution. In short: how do I get high-resolution content for a painting app on an iPad device using OpenGL ES?

I initialize the paint view with the following code:

-(id)initWithCoder:(NSCoder*)coder { 
    CGImageRef  brushImage; 
    CGContextRef brushContext; 
    GLubyte   *brushData; 
    size_t   width, height; 
    CGFloat   components[3]; 

    if ((self = [super initWithCoder:coder])) { 
     CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; 
     eaglLayer.opaque = NO; 
     eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; 
     context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; 

     if (!context || ![EAGLContext setCurrentContext:context]) { 
     return nil; 
     } 

     if(UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) 
     { 
     brushImage = [UIImage imageNamed:@"circle 64.png"].CGImage; 
     } 
     else { 
     brushImage = [UIImage imageNamed:@"flower 128.png"].CGImage; 
     } 

     // Get the width and height of the image 
     width = CGImageGetWidth(brushImage) ; 
     height = CGImageGetHeight(brushImage) ; 

     if(brushImage) { 
     // Allocate memory needed for the bitmap context 
     brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte)); 

     // Use the bitmap context creation function provided by the Core Graphics framework.
     brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast); 

     // After you create the context, you can draw the image to the context. 
     CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage); 

     // You don't need the context at this point, so you need to release it to avoid memory leaks. 
     CGContextRelease(brushContext); 

     // Use OpenGL ES to generate a name for the texture. 
     glGenTextures(1, &brushTexture); 

     // Bind the texture name. 
     glBindTexture(GL_TEXTURE_2D, brushTexture); 

     // Set the texture parameters to use a minifying linear filter (weighted average)
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); 

     // Specify a 2D texture image, providing a pointer to the image data in memory
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); 

     // Release the image data; it's no longer needed 
     free(brushData); 
     } 

     CGFloat scale;

     if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
     {
      NSLog(@"iPad");
      self.contentScaleFactor = 1.0;
     }
     else {
      // NSLog(@"iPhone");
      self.contentScaleFactor = 2.0;
     }

     // Read the scale back in both branches so it is never used uninitialized below.
     scale = self.contentScaleFactor;

     //scale = 2.000000; 

     // Setup OpenGL states 
     glMatrixMode(GL_PROJECTION); 
     CGRect frame = self.bounds; 
     NSLog(@"Scale %f", scale); 
     glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1); 
     glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale); 
     glMatrixMode(GL_MODELVIEW); 
     glDisable(GL_DITHER); 
     glEnable(GL_BLEND); 
     glEnable(GL_TEXTURE_2D); 
     glEnableClientState(GL_VERTEX_ARRAY); 
     glEnable(GL_BLEND); 

     // Set a blending function appropriate for premultiplied alpha pixel data 
     glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); 
     glEnable(GL_POINT_SPRITE_OES); 
     glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE); 
     glPointSize(width/kBrushScale); 

     // Make sure to start with a cleared buffer 
     needsErase = YES; 

     // Define a starting color 
     HSL2RGB((CGFloat) 0.0/(CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]); 
     [self setBrushColorWithRed:245.0f green:245.0f blue:0.0f]; 
     boolEraser=NO; 
    } 

    return self; 
} 
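For comparison only, a minimal variant of the scale setup above that derives the factor from the screen instead of the interface idiom (this is an assumption about the intended behaviour, not what the code above does); it keeps the backing store matched to the device's physical pixel grid:

    // Comparison sketch (same ivars as above): take the scale from the screen
    // rather than hard-coding it per device idiom.
    self.contentScaleFactor = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on Retina
    CGFloat scale = self.contentScaleFactor;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    CGRect frame = self.bounds;
    glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
    glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
    glMatrixMode(GL_MODELVIEW);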

The framebuffer is created with:

-(BOOL)createFramebuffer { 
    // Generate IDs for a framebuffer object and a color renderbuffer 
    glGenFramebuffersOES(1, &viewFramebuffer); 
    glGenRenderbuffersOES(1, &viewRenderbuffer); 

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); 

    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer) 
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view). 
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer]; 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer); 

    // Get the size of the backing CAEAGLLayer 
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); 
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); 

    // For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer. 
    glGenRenderbuffersOES(1, &depthRenderbuffer); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer); 
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight); 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer); 

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) 
    { 
     NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)); 
     return NO; 
    } 

    return YES; 
} 
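As a sanity check (a sketch only; it could sit right before the return YES above), the backing-store size just queried can be compared against bounds × contentScaleFactor:

    // Diagnostic sketch: the drawable's pixel size should be the view's point
    // size multiplied by contentScaleFactor.
    NSLog(@"Backing store %d x %d, expected %.0f x %.0f",
       backingWidth, backingHeight,
       self.bounds.size.width * self.contentScaleFactor,
       self.bounds.size.height * self.contentScaleFactor);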

Lines are drawn using the following code:

-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end { 
    static GLfloat*  vertexBuffer = NULL; 
    static NSUInteger vertexMax = 64; 
    NSUInteger   vertexCount = 0, 
         count, 
         i; 

    [EAGLContext setCurrentContext:context]; 
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); 

    // Convert locations from Points to Pixels 
    CGFloat scale = self.contentScaleFactor;
    NSLog(@"Scale %f", scale);

    start.x *= scale; 
    start.y *= scale; 
    end.x *= scale; 
    end.y *= scale; 

    float dx = end.x - start.x; 
    float dy = end.y - start.y; 
    float dist = (sqrtf(dx * dx + dy * dy)/ kBrushPixelStep); 

    // Allocate vertex array buffer
    if(vertexBuffer == NULL)
     vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

    count = MAX(ceilf(dist), 1);

    //NSLog(@"count %d",count);

    for(i = 0; i < count; ++i) {
     if (vertexCount == vertexMax) {
      vertexMax = 2 * vertexMax;
      vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
      // NSLog(@"if loop");
     }

     vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i/(GLfloat)count);
     vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i/(GLfloat)count);

     vertexCount += 1;
    }

    // Render the vertex array 
    glVertexPointer(2, GL_FLOAT, 0, vertexBuffer); 
    glDrawArrays(GL_POINTS, 0, vertexCount); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); 
    [context presentRenderbuffer:GL_RENDERBUFFER_OES]; 
} 

On the iPad the paint view's content looks fine at its normal size, but after zooming the painted lines do not become high resolution; the content of the paint view looks pixelated and distorted.

I have tried changing the contentScaleFactor as well as the scale value in the code above to see the difference, but nothing worked as expected. The iPad accepts a contentScaleFactor of 1.0 & 1.5; when I set contentScaleFactor = 2 the paint view cannot draw lines properly and shows strange dotted lines instead.

Is there any way to get high-resolution OpenGL content?

Answers


I'm not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the pixel dimensions you used when creating the render layer (multiplied by the content scale factor); once it has been created via

- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable 

there is no way to change its resolution, nor would doing so usually make sense: one renderbuffer pixel per screen pixel makes the most sense.

It is hard to know what problem you are trying to solve without knowing what you mean by zooming. I assume you have put the CAEAGLLayer inside a UIScrollView and you are seeing pixel artifacts when it is scaled up. That is unavoidable; how else could it work?

If you want your lines to stay smooth, you need to implement them as triangle-strip meshes with alpha-blended edges, which gives you anti-aliasing. Instead of scaling the layer itself, you then "zoom" the content by scaling the vertices while keeping the CAEAGLLayer the same size. That removes the pixelation and gives pretty alpha-blended edges.
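A rough sketch of that idea (assuming a hypothetical zoomScale value taken from the enclosing scroll view; the triangle-strip meshing itself is omitted): keep the CAEAGLLayer at screen size and apply the zoom as a modelview transform, so each frame is re-rasterised at the renderbuffer's native resolution instead of being magnified as pixels.

    // Sketch only: zoom by transforming the geometry, not by scaling the layer.
    // zoomScale is assumed to come from the enclosing UIScrollView's delegate.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glScalef(zoomScale, zoomScale, 1.0f);   // a pan offset would be a glTranslatef here too
    // ... then draw the strokes as usual. Note that GL_POINTS sprites are not
    // scaled by the modelview matrix, so the point size (or a triangle-strip
    // brush, as suggested above) has to be scaled separately.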


OpenGL is a vector library – rockeye 2012-03-13 13:39:48


You have edited it correctly, although the basic line and point primitives are basically unusable on iOS anyway. – Tark 2012-03-13 17:36:29


The short answer is YES, you can have "high resolution" content.

But before solving the problem you have to understand it clearly. Here is the long answer:

The brushes you use have a fixed size (64 or 128 pixels). As soon as your virtual paper (the area you draw on) displays them larger than one texel per screen pixel, you will start to see "distortion". For example, if you open one of your brushes in your favourite image viewer and zoom in, it will look distorted too. You cannot avoid this unless you use vector brushes (which is outside the scope of this answer and far more complex).

The quickest fix is to use more detailed brushes, but it is a workaround: if you zoom in far enough, the textures will look distorted again.

You can also add a magnification filter with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. Your sample only sets the MIN filter; adding this will make the texture look smoother.
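Applied to the texture setup in the question, that is one extra call next to the existing MIN filter (a sketch; bilinear magnification smooths the brush but cannot add detail that is not in the texture):

    glBindTexture(GL_TEXTURE_2D, brushTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // smooths magnification when zoomed in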


Hey rockeye, thanks for your reply. In this code I have added the paintView inside a UIScrollView for zooming purposes. After adding GL_TEXTURE_MAG_FILTER, my circular brushImage shows up as a rectangular shape [I think it shows a 64*64 rectangle, which is the size of the brushImage]. I tried increasing the size of the brushImage [128*128], but that only increases the size of the brush in the paint view, not the pixel quality. I don't know how to improve the brushImage pixel quality so that it does not show distorted content when zoomed. – user392406 2012-03-14 05:50:01


This is a fairly complex problem. In short, you will have to change your code to distinguish the brush size in your workspace (which has its own size and resolution) from the brush resolution. For example, use 1024-pixel brush images and generate mipmaps. In your painter the user chooses the size of the brush tool (relative to the workspace). As long as the workspace resolution is greater than the brush resolution it will look fine. Play around with resolutions and brushes in some popular image editors and you will see how to achieve what you want. – rockeye 2012-03-14 09:21:13
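A minimal sketch of that high-resolution-brush idea, assuming brushData already holds a decoded 1024×1024 RGBA brush image and reusing the brushTexture name from the question (brushSizeInPixels is a hypothetical value chosen from the tool size and zoom level):

    // Sketch: upload a large brush once and let mipmapping pick the right level.
    glBindTexture(GL_TEXTURE_2D, brushTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
    glGenerateMipmapOES(GL_TEXTURE_2D);  // part of OES_framebuffer_object on OpenGL ES 1.1
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The on-screen brush size stays independent of the texture resolution:
    glPointSize(brushSizeInPixels);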
