Objective-C: improving CIImage filter speed. I wrote the code below to apply a sepia filter to an image:
- (void)applySepiaFilter {
    // Push a copy of the current image onto the undo stack
    NSData *buffer = [NSKeyedArchiver archivedDataWithRootObject:self.mainImage.image];
    [_images push:[NSKeyedUnarchiver unarchiveObjectWithData:buffer]];

    UIImage *u = self.mainImage.image;
    CIImage *image = [[CIImage alloc] initWithCGImage:u.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, image,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outputImage = [filter outputImage];
    self.mainImage.image = [self imageFromCIImage:outputImage];
}
- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
When I run this code it lags for 1–2 seconds. I've heard that Core Image is faster than Core Graphics, but I'm unimpressed with the rendering time here. I'm wondering whether this would process faster in Core Graphics, or even in OpenCV (which is used elsewhere in the project)? If not, is there any way I can optimize this code to run faster?
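One likely culprit is that a new CIContext is allocated on every filter pass; creating a context is expensive, and Apple's Core Image guidance recommends creating it once and reusing it. Below is a minimal sketch of that change, assuming a lazily-initialized property named `filterContext` (that property name is illustrative, not from the original code):

```objectivec
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Lazily create the CIContext once and reuse it for every render,
// instead of paying the allocation cost on each filter pass.
- (CIContext *)filterContext {
    if (!_filterContext) {
        _filterContext = [CIContext contextWithOptions:nil];
    }
    return _filterContext;
}

- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CGImageRef cgImage = [self.filterContext createCGImage:ciImage
                                                  fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
```

The archive/unarchive round-trip used to snapshot the previous image may also be costing time; profiling first (see the comments below) will show which part dominates.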
Have you used Instruments to find out what's actually slow here? – zneak
@zneak Which instruments are those? –
Off the top of my head, something like the Time Profiler. – zneak