2011-07-14

I have this strange problem... I need to capture the screen data and convert it into an image using the code below. This code works fine in the iPhone/iPad simulator and on an iPhone device, but not on the iPad. The iPhone device runs iOS 3.1.1 and the iPad runs iOS 4.2. The screenshot code below does not work on the iPad, only on the iPhone.

- (UIImage *)screenshotImage { 
CGRect screenBounds = [[UIScreen mainScreen] bounds]; 
int backingWidth = screenBounds.size.width; 
int backingHeight =screenBounds.size.height; 
NSInteger myDataLength = backingWidth * backingHeight * 4; 
GLuint *buffer = (GLuint *) malloc(myDataLength); 
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer); 
for(int y = 0; y < backingHeight/2; y++) { 
    for(int xt = 0; xt < backingWidth; xt++) { 
     GLuint top = buffer[y * backingWidth + xt]; 
     GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + xt]; 
     buffer[(backingHeight - 1 - y) * backingWidth + xt] = top; 
     buffer[y * backingWidth + xt] = bottom; 
    } 
} 
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData); 
const int bitsPerComponent = 8; 
const int bitsPerPixel = 4 * bitsPerComponent; 
const int bytesPerRow = 4 * backingWidth; 

CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB(); 
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault; 
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault; 
CGImageRef imageRef = CGImageCreate(backingWidth,backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent); 
CGColorSpaceRelease(colorSpaceRef); 
CGDataProviderRelease(provider); 

UIImage *myImage = [UIImage imageWithCGImage:imageRef]; 
CGImageRelease(imageRef); 

// myImage = [self addIconToImage:myImage]; 
return myImage; 
}

Any idea what is going wrong??

Answers


These two lines do not match:

NSInteger myDataLength = backingWidth * backingHeight * 4; 

glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer); 

GL_RGBA4 means 4 bits per channel, but you allocated for 8 bits per channel. The matching token would be GL_RGBA8. GL_RGBA4 may not be supported on the iPhone and may fall back to GL_RGBA.

Also make sure you are reading from the correct buffer (front vs. back vs. any (accidentally) bound FBO). I suggest reading from the back buffer before performing the buffer swap.


Thanks for the quick reply. I have now changed GL_RGBA4 to GL_RGBA. The code runs fine on the iPhone device, but not on the iPad. Any idea what the problem is? I am using the same code on both devices. – Tornado


Change it to GL_RGBA8, not just GL_RGBA (note the '8'). – datenwolf


Hey, my Xcode does not offer GL_RGBA8 as an option, only GL_RGBA8_OES. When I use GL_RGBA8_OES it gives the compile-time error "GL_RGBA8_OES was not declared in this scope". – Tornado


For iOS 4 or later I am using multisampling for anti-aliasing. glReadPixels() cannot read directly from a multisampled FBO; you need to resolve it into a single-sampled buffer first and then read from that. Please refer to the following post:

Reading data using glReadPixel() with multisampling
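For reference, the resolve step this answer describes typically uses Apple's GL_APPLE_framebuffer_multisample extension and looks roughly like this (a sketch; `_msaaFramebuffer`, `_resolveFramebuffer`, `width`, `height`, and `pixels` are placeholder names, not from the post):

```objc
// Resolve the multisampled FBO into a single-sampled FBO before reading.
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, _msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, _resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();

// Now glReadPixels can read from the resolved (single-sample) framebuffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _resolveFramebuffer);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```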


Screenshot code from Apple's OpenGL ES documentation:

- (UIImage*)snapshot:(UIView*)eaglview 
{ 
    GLint backingWidth, backingHeight; 

    // Bind the color renderbuffer used to render the OpenGL ES view 
    // If your application only creates a single color renderbuffer which is already bound at this point, 
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers. 
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class. 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer); 

    // Get the size of the backing CAEAGLLayer 
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); 
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); 

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight; 
    NSInteger dataLength = width * height * 4; 
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte)); 

    // Read pixel data from the framebuffer 
    glPixelStorei(GL_PACK_ALIGNMENT, 4); 
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data); 

    // Create a CGImage with the pixel data 
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel 
    // otherwise, use kCGImageAlphaPremultipliedLast 
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL); 
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); 
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, 
            ref, NULL, true, kCGRenderingIntentDefault); 

    // OpenGL ES measures data in PIXELS 
    // Create a graphics context with the target size measured in POINTS 
    NSInteger widthInPoints, heightInPoints; 
    if (NULL != UIGraphicsBeginImageContextWithOptions) { 
     // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration 
     // Set the scale parameter to your OpenGL ES view's contentScaleFactor 
     // so that you get a high-resolution snapshot when its value is greater than 1.0 
     CGFloat scale = eaglview.contentScaleFactor; 
     widthInPoints = width/scale; 
     heightInPoints = height/scale; 
     UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale); 
    } 
    else { 
     // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext 
     widthInPoints = width; 
     heightInPoints = height; 
     UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints)); 
    } 

    CGContextRef cgcontext = UIGraphicsGetCurrentContext(); 

    // UIKit coordinate system is upside down to GL/Quartz coordinate system 
    // Flip the CGImage by rendering it to the flipped bitmap context 
    // The size of the destination area is measured in POINTS 
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy); 
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref); 

    // Retrieve the UIImage from the current context 
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); 

    UIGraphicsEndImageContext(); 

    // Clean up 
    free(data); 
    CFRelease(ref); 
    CFRelease(colorspace); 
    CGImageRelease(iref); 

    return image; 
} 