2012-06-06

In my application I want to do the following steps: capture the iOS screen, then crop and mask the resulting image.

1 - Capture the screen. This part is no problem for me; I use the following code:

- (UIImage *)captureScreen { 
    // A scale of 0.0f uses the device's main screen scale, so the
    // screenshot is rendered at full Retina resolution.
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 0.0f); 
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()]; 

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); 
    UIGraphicsEndImageContext(); 

    return image; 
} 

2 - I crop the image with this function:

- (UIImage *)cropImage:(UIImage *)image inRect:(CGRect)rect { 
    // Note: CGImageCreateWithImageInRect works in pixel coordinates
    // of the underlying CGImage, not in points.
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect); 
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef]; 
    CGImageRelease(imageRef); 

    return resultImage; 
} 

3 - Then I mask the cropped image with a pure black-and-white mask:

- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage { 

    // Build a Core Graphics mask from the black-and-white mask image.
    CGImageRef maskRef = maskImage.CGImage; 
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), 
             CGImageGetHeight(maskRef), 
             CGImageGetBitsPerComponent(maskRef), 
             CGImageGetBitsPerPixel(maskRef), 
             CGImageGetBytesPerRow(maskRef), 
             CGImageGetDataProvider(maskRef), NULL, false); 

    CGImageRef maskedRef = CGImageCreateWithMask([image CGImage], mask); 
    UIImage *resultImage = [UIImage imageWithCGImage:maskedRef]; 
    CGImageRelease(mask); 
    CGImageRelease(maskedRef); 

    return resultImage; 
} 

However, in the result image I get, the area outside the mask's shape is black instead of transparent. Can anyone help me?

Answers

I solved my problem: it was caused by the image being masked having no alpha channel. So before masking, I create another UIImage that has an alpha channel and then continue with my steps.

This is the code to create a UIImage with an alpha channel (a category method on UIImage):

- (UIImage *)imageWithAlpha { 
    CGImageRef imageRef = self.CGImage; 
    size_t width = CGImageGetWidth(imageRef); 
    size_t height = CGImageGetHeight(imageRef); 

    // Redraw the image into an ARGB bitmap context so the result
    // is guaranteed to have an alpha channel.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedFirst); 
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); 

    CGImageRef resultImageRef = CGBitmapContextCreateImage(context); 
    UIImage *resultImage = [UIImage imageWithCGImage:resultImageRef scale:self.scale orientation:self.imageOrientation]; 

    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 
    CGImageRelease(resultImageRef); 

    return resultImage; 
} 
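With this category in place, the original steps can be chained together. The sketch below is not from the original answer: it assumes `captureScreen`, `cropImage:inRect:`, and `maskImage:withMask:` are the methods shown in the question, that `imageWithAlpha` is available as a UIImage category, and that the crop rect values and the mask asset name are placeholders. The crop rect is multiplied by the screenshot's `scale` because `CGImageCreateWithImageInRect` works in pixels.

```objectivec
// Hypothetical usage combining the question's steps with the alpha fix.
UIImage *screenshot = [self captureScreen];

// Convert the crop rect from points to pixels for CGImageCreateWithImageInRect.
CGFloat scale = screenshot.scale;
CGRect cropRect = CGRectMake(20 * scale, 40 * scale, 100 * scale, 100 * scale);
UIImage *cropped = [self cropImage:screenshot inRect:cropRect];

// Add an alpha channel before masking; otherwise the masked-out
// area is rendered black instead of transparent.
UIImage *croppedWithAlpha = [cropped imageWithAlpha];
UIImage *maskImg = [UIImage imageNamed:@"mask"]; // placeholder black-and-white mask asset
UIImage *result = [self maskImage:croppedWithAlpha withMask:maskImg];
```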

This works for me. Hope it works for you too.

- (UIImage *)doImageMask:(UIImage *)mainImage :(UIImage *)maskImage { 

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGImageRef maskImageRef = [maskImage CGImage]; 

    // Create a bitmap graphics context the size of the mask image. 
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); 
    CGColorSpaceRelease(colorSpace); // the context retains the color space 

    if (mainViewContentContext == NULL) { 
        return nil; 
    } 

    // Scale the main image (aspect fill) so it covers the whole mask. 
    CGFloat ratio = maskImage.size.width / mainImage.size.width; 
    if (ratio * mainImage.size.height < maskImage.size.height) { 
        ratio = maskImage.size.height / mainImage.size.height; 
    } 

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}}; 
    CGRect rect2 = {{-((mainImage.size.width * ratio) - maskImage.size.width) / 2, -((mainImage.size.height * ratio) - maskImage.size.height) / 2}, {mainImage.size.width * ratio, mainImage.size.height * ratio}}; 

    // Clip to the mask, then draw the scaled, centered image into it. 
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef); 
    CGContextDrawImage(mainViewContentContext, rect2, mainImage.CGImage); 

    // Create a CGImageRef from the bitmap context, then release the context. 
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext); 
    CGContextRelease(mainViewContentContext); 

    UIImage *theImage = [UIImage imageWithCGImage:newImage]; 
    CGImageRelease(newImage); 

    return theImage; 
}
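A short usage sketch for the method above (not from the original post). The asset names are hypothetical placeholders, and since the method's second parameter is unnamed, the call uses a bare colon:

```objectivec
// Hypothetical usage; "photo" and "mask" are placeholder asset names.
UIImage *photo = [UIImage imageNamed:@"photo"];
UIImage *mask = [UIImage imageNamed:@"mask"]; // black-and-white mask image
UIImage *masked = [self doImageMask:photo :mask];
```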