2010-07-19 135 views

Answers

10

You can't access the raw data directly, but by getting the CGImage of this image you can access it. Here is a link that answers your question and others you may have about detailed image processing: CGImage

85

Try this very simple code:

I use it to detect walls in my maze game (the only information I need is the alpha channel, but I've included the code to get the other colors for you):

- (BOOL)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y { 

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)); 
    const UInt8* data = CFDataGetBytePtr(pixelData); 

    int pixelInfo = ((int)CGImageGetWidth(image.CGImage) * y + x) * 4; // 4 bytes per pixel (RGBA); use the CGImage's pixel width, not image.size.width, which is in points 

    //UInt8 red = data[pixelInfo];   // If you need this info, enable it 
    //UInt8 green = data[(pixelInfo + 1)]; // If you need this info, enable it 
    //UInt8 blue = data[pixelInfo + 2]; // If you need this info, enable it 
    UInt8 alpha = data[pixelInfo + 3];  // I need only this info for my maze game 
    CFRelease(pixelData); 

    //UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info 

    return alpha > 0; 

} 
+0

Can you help me with getting a pixel's position relative to the image size? I'm using this to locate objects in a game. Thanks. – tallen11 2011-12-16 00:36:56

+0

Sorry, but I don't understand your question. Could you be more specific? Post some sample code? Are you trying to find a pixel in the image? – 2012-04-04 01:24:40

+0

What are x and y? – 2012-06-29 14:20:21

16

OnTouch

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event 
{ 
    UITouch *touch = [[touches allObjects] objectAtIndex:0]; 
    CGPoint point1 = [touch locationInView:self.view]; 
    touch = [[event allTouches] anyObject]; 
    if ([touch view] == imgZoneWheel) 
    { 
     CGPoint location = [touch locationInView:imgZoneWheel]; 
     [self getPixelColorAtLocation:location]; 
     if(alpha==255) 
     { 
      NSLog(@"In Image Touch view alpha %d",alpha); 
      [self translateCurrentTouchPoint:point1.x :point1.y]; 
      [imgZoneWheel setImage:[UIImage imageNamed:[NSString stringWithFormat:@"blue%d.png",GrndFild]]]; 
     } 
    } 
} 



- (UIColor*) getPixelColorAtLocation:(CGPoint)point 
{ 

    UIColor* color = nil; 

    CGImageRef inImage; 

    inImage = imgZoneWheel.image.CGImage; 


    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue 
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage]; 
    if (cgctx == NULL) { return nil; /* error */ } 

    size_t w = CGImageGetWidth(inImage); 
    size_t h = CGImageGetHeight(inImage); 
    CGRect rect = {{0,0},{w,h}}; 


    // Draw the image to the bitmap context. Once we draw, the memory 
    // allocated for the context for rendering will then contain the 
    // raw image data in the specified color space. 
    CGContextDrawImage(cgctx, rect, inImage); 

    // Now we can get a pointer to the image data associated with the bitmap 
    // context. 
    unsigned char* data = CGBitmapContextGetData (cgctx); 
    if (data != NULL) { 
     //offset locates the pixel in the data from x,y. 
     //4 for 4 bytes of data per pixel, w is width of one row of data. 
     int offset = 4*((w*round(point.y))+round(point.x)); 
     alpha = data[offset]; 
     int red = data[offset+1]; 
     int green = data[offset+2]; 
     int blue = data[offset+3]; 
     color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)]; 
    } 

    // When finished, release the context 
    CGContextRelease(cgctx); 
    // Free image data memory for the context 
    if (data) { free(data); } 

    return color; 
} 

- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef)inImage 
{ 
    CGContextRef context = NULL; 
    CGColorSpaceRef colorSpace; 
    void *   bitmapData; 
    int    bitmapByteCount; 
    int    bitmapBytesPerRow; 

    // Get image width, height. We'll use the entire image. 
    size_t pixelsWide = CGImageGetWidth(inImage); 
    size_t pixelsHigh = CGImageGetHeight(inImage); 

    // Declare the number of bytes per row. Each pixel in the bitmap in this 
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and 
    // alpha. 
    bitmapBytesPerRow = (pixelsWide * 4); 
    bitmapByteCount  = (bitmapBytesPerRow * pixelsHigh); 

    // Use the generic RGB color space. 
    colorSpace = CGColorSpaceCreateDeviceRGB(); 

    if (colorSpace == NULL) 
    { 
     fprintf(stderr, "Error allocating color space\n"); 
     return NULL; 
    } 

    // Allocate memory for image data. This is the destination in memory 
    // where any drawing to the bitmap context will be rendered. 
    bitmapData = malloc(bitmapByteCount); 
    if (bitmapData == NULL) 
    { 
     fprintf (stderr, "Memory not allocated!"); 
     CGColorSpaceRelease(colorSpace); 
     return NULL; 
    } 

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits 
    // per component. Regardless of what the source image format is 
    // (CMYK, Grayscale, and so on) it will be converted over to the format 
    // specified here by CGBitmapContextCreate. 
    context = CGBitmapContextCreate (bitmapData, 
            pixelsWide, 
            pixelsHigh, 
            8,  // bits per component 
            bitmapBytesPerRow, 
            colorSpace, 
            kCGImageAlphaPremultipliedFirst); 
    if (context == NULL) 
    { 
     free (bitmapData); 
     fprintf (stderr, "Context not created!"); 
    } 

    // Make sure and release colorspace before returning 
    CGColorSpaceRelease(colorSpace); 

    return context; 
} 
+0

'point = CGPointMake(point.x * image.scale,point.y * image.scale);' – uranpro 2017-11-02 08:54:33

0
First of all, create and attach a tap gesture recognizer, and enable user interaction:

UITapGestureRecognizer * tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGesture:)]; 
[self.label addGestureRecognizer:tapRecognizer]; 
self.label.userInteractionEnabled = YES; 

Now implement the tap handler:

- (void)tapGesture:(UITapGestureRecognizer *)recognizer 
{ 
    CGPoint point = [recognizer locationInView:self.label]; 

    UIGraphicsBeginImageContext(self.label.bounds.size); 
    CGContextRef context = UIGraphicsGetCurrentContext(); 
    [self.label.layer renderInContext:context]; 

    int bpr = CGBitmapContextGetBytesPerRow(context); 
    unsigned char * data = CGBitmapContextGetData(context); 
    if (data != NULL) 
    { 
     int offset = bpr*round(point.y) + 4*round(point.x); 
     int blue = data[offset+0]; 
     int green = data[offset+1]; 
     int red = data[offset+2]; 
     int alpha = data[offset+3]; 

     NSLog(@"%d %d %d %d", alpha, red, green, blue); 

     if (alpha == 0) 
     { 
      // Here is tap out of text 
     } 
     else 
     { 
      // Here is tap right into text 
     } 
    } 

    UIGraphicsEndImageContext(); 
} 

This works for a UILabel with a transparent background; if that's not what you want, you can compare the alpha, red, green and blue values with those of self.label.backgroundColor...

+1

How is the tap gesture involved here? – amleszk 2017-06-13 06:17:28

9

Here is a generic method for getting the pixel color of a UIImage, based on Minas Petterson's answer:

- (UIColor*)pixelColorInImage:(UIImage*)image atX:(int)x atY:(int)y { 

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)); 
    const UInt8* data = CFDataGetBytePtr(pixelData); 

    int pixelInfo = ((image.size.width * y) + x) * 4; // 4 bytes per pixel 

    UInt8 red = data[pixelInfo + 0]; 
    UInt8 green = data[pixelInfo + 1]; 
    UInt8 blue = data[pixelInfo + 2]; 
    UInt8 alpha = data[pixelInfo + 3]; 
    CFRelease(pixelData); 

    return [UIColor colorWithRed:red /255.0f 
          green:green/255.0f 
          blue:blue /255.0f 
          alpha:alpha/255.0f]; 
} 

Note that X and Y may be swapped; this function accesses the underlying bitmap directly and does not account for any rotation that may be part of the UIImage.

+0

Is there a way to reassemble an image from these color numbers? – anivader 2016-05-04 10:52:32

+1

This function doesn't take the pixel format into account; my image was in BGR format. – Andy 2016-11-25 16:55:42

7
- (UIColor *)colorAtPixel:(CGPoint)point inImage:(UIImage *)image { 

    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), point)) { 
     return nil; 
    } 

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into. 
    NSInteger pointX = trunc(point.x); 
    NSInteger pointY = trunc(point.y); 
    CGImageRef cgImage = image.CGImage; 
    NSUInteger width = image.size.width; 
    NSUInteger height = image.size.height; 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    int bytesPerPixel = 4; 
    int bytesPerRow = bytesPerPixel * 1; 
    NSUInteger bitsPerComponent = 8; 
    unsigned char pixelData[4] = { 0, 0, 0, 0 }; 
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); 
    CGColorSpaceRelease(colorSpace); 
    CGContextSetBlendMode(context, kCGBlendModeCopy); 

    // Draw the pixel we are interested in onto the bitmap context 
    CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height); 
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage); 
    CGContextRelease(context); 

    // Convert color values [0..255] to floats [0.0..1.0] 
    CGFloat red = (CGFloat)pixelData[0]/255.0f; 
    CGFloat green = (CGFloat)pixelData[1]/255.0f; 
    CGFloat blue = (CGFloat)pixelData[2]/255.0f; 
    CGFloat alpha = (CGFloat)pixelData[3]/255.0f; 
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; 
} 
+0

I think the result is wrong, because the bitmap context's alpha info is *kCGImageAlphaPremultipliedLast*, yet when you retrieve the pixel color you treat it as a non-premultiplied value. – Swordsfrog 2015-10-28 08:14:09

4

Some Swift code based on Minas' answer. I've extended UIImage so it's accessible everywhere, and I've added some simple logic to guess the image format based on the pixel stride (1, 3, or 4 bytes per pixel):

Swift 2:

public extension UIImage { 
    func getPixelColor(point: CGPoint) -> UIColor { 
    guard let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage)) else { 
     return UIColor.clearColor() 
    } 
    let data = CFDataGetBytePtr(pixelData) 
    let x = Int(point.x) 
    let y = Int(point.y) 
    let index = Int(self.size.width) * y + x 
    let expectedLengthA = Int(self.size.width * self.size.height) 
    let expectedLengthRGB = 3 * expectedLengthA 
    let expectedLengthRGBA = 4 * expectedLengthA 
    let numBytes = CFDataGetLength(pixelData) 
    switch numBytes { 
    case expectedLengthA: 
     return UIColor(red: 0, green: 0, blue: 0, alpha: CGFloat(data[index])/255.0) 
    case expectedLengthRGB: 
     return UIColor(red: CGFloat(data[3*index])/255.0, green: CGFloat(data[3*index+1])/255.0, blue: CGFloat(data[3*index+2])/255.0, alpha: 1.0) 
    case expectedLengthRGBA: 
     return UIColor(red: CGFloat(data[4*index])/255.0, green: CGFloat(data[4*index+1])/255.0, blue: CGFloat(data[4*index+2])/255.0, alpha: CGFloat(data[4*index+3])/255.0) 
    default: 
     // unsupported format 
     return UIColor.clearColor() 
    } 
    } 
} 

Updated for Swift 4:

func getPixelColor(_ image: UIImage, _ point: CGPoint) -> UIColor { 
    guard let cgImage = image.cgImage, 
       let pixelData = cgImage.dataProvider?.data else { 
     return UIColor.clear 
    } 
    let data = CFDataGetBytePtr(pixelData)! 
    let x = Int(point.x) 
    let y = Int(point.y) 
    // Use the CGImage's pixel dimensions; image.size is measured in points 
    let index = cgImage.width * y + x 
    let expectedLengthA = cgImage.width * cgImage.height 
    let expectedLengthRGB = 3 * expectedLengthA 
    let expectedLengthRGBA = 4 * expectedLengthA 
    let numBytes = CFDataGetLength(pixelData) 
    switch numBytes { 
    case expectedLengthA: 
     return UIColor(red: 0, green: 0, blue: 0, alpha: CGFloat(data[index])/255.0) 
    case expectedLengthRGB: 
     return UIColor(red: CGFloat(data[3*index])/255.0, green: CGFloat(data[3*index+1])/255.0, blue: CGFloat(data[3*index+2])/255.0, alpha: 1.0) 
    case expectedLengthRGBA: 
     return UIColor(red: CGFloat(data[4*index])/255.0, green: CGFloat(data[4*index+1])/255.0, blue: CGFloat(data[4*index+2])/255.0, alpha: CGFloat(data[4*index+3])/255.0) 
    default: 
     // unsupported format 
     return UIColor.clear 
    } 
} 
+0

Asking others because I'm not quite sure: I think if there is only 1 byte per pixel, it would be a white (grayscale) value, not an alpha value. Can someone confirm? – BridgeTheGap 2017-03-28 03:39:37

+0

It could be; you have to make a judgment call. The image could be grayscale, in which case the value would be white, but it could also be a transparency mask, in which case it would be alpha. I'd say transparency masks are probably more common than grayscale images these days, so the decision to use alpha makes sense. Personally, I think this could be improved for specific cases, since running all of this code for every pixel test is inefficient when iterating over a large number of pixels. – Ash 2017-12-16 10:56:36

+0

n.b. You can use CGImage's 'isMask' property to check whether the image is a mask. – Ash 2017-12-16 11:12:22