I have a UIImageView in my application, and I have attached a gesture recognizer to that image view. How can I call a method only when the visible part of the image view is tapped?
My problem is that the action should fire only when the touch lands on the image's visible area, not when the touch occurs on its transparent parts.
How can I achieve this?
For per-pixel alpha hit testing, take a look at OBShapedButton on GitHub. It solves exactly this problem. Happy coding! :)
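OBShapedButton's trick is to override hit testing so that touches on transparent pixels fall through the view entirely. A minimal sketch of the same idea for an image view subclass might look like this (the subclass name is made up for illustration; `colorOfPoint:` refers to the UIView category shown further down this page, and any per-pixel alpha lookup would work in its place):

```objectivec
#import <UIKit/UIKit.h>

// Hypothetical subclass name -- the technique is what matters.
@interface TransparencyAwareImageView : UIImageView
@end

@implementation TransparencyAwareImageView

// Touches on (nearly) transparent pixels are treated as misses, so any
// gesture recognizer attached to this view never fires there.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    if (![super pointInside:point withEvent:event]) {
        return NO;
    }
    // colorOfPoint: is the pixel-reading category from this page.
    UIColor *color = [self colorOfPoint:point];
    return CGColorGetAlpha(color.CGColor) > 0.1; // tune the threshold as needed
}

@end
```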
These code snippets should get you started. The first piece is a UIView category (I got it from somewhere, but I don't remember where). It gets you the color under the touch point.
@interface UIView (ColorOfPoint)
- (UIColor *)colorOfPoint:(CGPoint)point;
@end

@implementation UIView (ColorOfPoint)

- (UIColor *)colorOfPoint:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Render just the single pixel under `point` into a 1x1 RGBA bitmap.
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIColor *color = [UIColor colorWithRed:pixel[0]/255.0
                                     green:pixel[1]/255.0
                                      blue:pixel[2]/255.0
                                     alpha:pixel[3]/255.0];
    return color;
}

@end
This second piece shows how to call that method from a custom image view class:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint loc = [touch locationInView:self];
    self.pickedColor = [self colorOfPoint:loc];
    [[NSNotificationCenter defaultCenter] postNotificationName:@"ColorPicked"
                                                        object:self
                                                      userInfo:nil];
}
I used a notification here because I needed to let other classes know the value of the picked color.
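For completeness, another class would subscribe to that notification roughly like this (a sketch: the observer class, the `colorPicked:` selector, and the `MyImageView` type are made up for illustration; only the notification name comes from the code above):

```objectivec
// In the observing class, e.g. in viewDidLoad:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(colorPicked:)
                                             name:@"ColorPicked"
                                           object:nil];

// The posting image view arrives as the notification's object.
- (void)colorPicked:(NSNotification *)note
{
    MyImageView *imageView = note.object;
    NSLog(@"picked color: %@", imageView.pickedColor);
}
```

Remember to balance this with `removeObserver:` (e.g. in `dealloc`) so the observer is not messaged after it is gone.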
Thanks a lot! It works. – alok 2012-08-08 07:30:13
You can use this to get the color at the touch point:
- (UIColor *)getPixelColorAtLocation:(CGPoint)point
{
    CGImageRef inImage;
    UIColor *color = nil;
    inImage = [self.imageView.image CGImage];
    CGContextRef context = [self createARGBBitmapContextFromImage:inImage];
    if (context == NULL) return nil;

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0, 0}, {w, h}};

    // Draw the image into the bitmap context. Once we draw, the memory
    // allocated for the context will contain the raw image data in the
    // specified color space.
    CGContextDrawImage(context, rect, inImage);

    // Now we can get a pointer to the image data associated with the
    // bitmap context.
    unsigned char *data = CGBitmapContextGetData(context);
    if (data != NULL) {
        // offset locates the pixel in the data from (x, y):
        // 4 bytes of data per pixel; w is the width of one row of pixels.
        int offset = 4 * ((w * round(point.y)) + round(point.x));
        int alpha = data[offset];
        int red   = data[offset + 1];
        int green = data[offset + 2];
        int blue  = data[offset + 3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red / 255.0f)
                                green:(green / 255.0f)
                                 blue:(blue / 255.0f)
                                alpha:(alpha / 255.0f)];
    }

    // When finished, release the context and free the image data memory.
    CGContextRelease(context);
    if (data) {
        free(data);
    }
    return color;
}

- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap is
    // represented by 4 bytes: 8 bits each of alpha, red, green, and blue.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for the image data. This is the destination in
    // memory where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits per
    // component. Regardless of the source image format (CMYK, grayscale,
    // and so on), it will be converted to the format specified here by
    // CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the color space before returning.
    CGColorSpaceRelease(colorSpace);
    return context;
}
If you only need the touch handling to work over the image itself, use the view hierarchy: use a UIView as a container and put the UIImageView in it as a subview, make the image view's frame match the size of its image, and attach the gesture recognizer to the UIImageView only :) – 2012-08-08 05:59:18
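That layout-based approach might be set up like this (a sketch: the `handleTap:` selector and the `@"shape"` asset name are made up; note this only helps when the image fills its frame, since it cannot distinguish transparent pixels inside the image):

```objectivec
// Container sized to the full layout; the image view is only as big
// as its image, so taps outside the image never reach it.
UIView *container = [[UIView alloc] initWithFrame:self.view.bounds];

UIImage *image = [UIImage imageNamed:@"shape"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
imageView.userInteractionEnabled = YES; // required for gesture recognizers

UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handleTap:)];
[imageView addGestureRecognizer:tap]; // attached to the image view only

[container addSubview:imageView];
[self.view addSubview:container];
```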