Friday, March 26, 2010

Irregularly Shaped UIButtons

Note: There is an improved version of the code from this blog post right here.

You probably know that UIButton allows you to select an image or background image with alpha, and it will respect the alpha. For example, if I create four images that look like this:

[Screen shot: four diamond-shaped button images, each with a transparent background]

I can then create custom buttons in Interface Builder using these images, and whatever is behind the transparent parts of a button will show through (assuming the button is not marked opaque). However, UIButton's hit-testing doesn't take the transparency into account, which means that if you overlap these buttons in Interface Builder so they look like this, for example:

[Screen shot: the diamond buttons overlapped in Interface Builder]

If you click here:

[Screen shot: an arrow marking a click point that lies on the blue diamond but inside the green diamond button's rectangular frame]

The default hit-testing is going to result in the green diamond button getting pressed, not the blue one. While this might be what you want some of the time, typically it won't be the behavior you want. So, how do you get the hit-testing to respect the transparency? It's actually pretty easy: you just need to subclass UIButton and override the hit-testing method.

But first, we need a way to determine whether a given point on an image is transparent. Unfortunately, UIImage doesn't give us easy access to the underlying bitmap data the way NSBitmapImageRep does for NSImage in Cocoa. But every UIImage instance does have a property called CGImage that gives us access to the underlying image data, and Apple has very nicely published a tech note explaining how to get at the bitmap data from a CGImageRef.

Using the information in that tech note, we can easily craft a category on UIImage with a method that takes a CGPoint as an argument and returns YES or NO depending on whether the alpha value at that point is transparent (0).

UIImage-Alpha.h
#import <UIKit/UIKit.h>

@interface UIImage(Alpha)
- (NSData *)ARGBData;
- (BOOL)isPointTransparent:(CGPoint)point;
@end



UIImage-Alpha.m
#import "UIImage-Alpha.h"

// Creates a bitmap context backed by a malloc'd ARGB buffer (8 bits per
// component, 4 bytes per pixel) large enough to hold the given image.
CGContextRef CreateARGBBitmapContext (CGImageRef inImage)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void            *bitmapData;
    size_t          bitmapByteCount;
    size_t          bitmapBytesPerRow;
    
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = pixelsWide * 4;
    bitmapByteCount = bitmapBytesPerRow * pixelsHigh;
    
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return NULL;
    
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,                  // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    
    CGColorSpaceRelease(colorSpace);
    
    return context;
}


@implementation UIImage(Alpha)
- (NSData *)ARGBData
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;
    
    // Draw the image into the ARGB context so we can read its pixels back out.
    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);
    
    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;
    
    size_t dataSize = 4 * w * h; // ARGB = 4 8-bit components
    NSData *result = [NSData dataWithBytes:data length:dataSize];
    free(data); // NSData copied the bytes, so release the malloc'd buffer
    return result;
}

- (BOOL)isPointTransparent:(CGPoint)point
{
    NSData *rawData = [self ARGBData]; // See about caching this
    if (rawData == nil)
        return NO;
    
    size_t bpp = 4;                                  // bytes per pixel (ARGB)
    size_t bpr = CGImageGetWidth(self.CGImage) * 4;  // bytes per row, matching the bitmap context
    
    // The alpha byte is the first of the four bytes for each pixel.
    NSUInteger index = (NSUInteger)point.x * bpp + ((NSUInteger)point.y * bpr);
    const char *rawDataBytes = (const char *)[rawData bytes];
    
    return rawDataBytes[index] == 0;
}

@end
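Just to see the category in action on its own, here's a quick, hypothetical usage sketch (the image name is a placeholder, not something from the project):

UIImage *diamond = [UIImage imageNamed:@"diamond.png"]; // placeholder image name
CGPoint corner = CGPointMake(0.0, 0.0); // the corners of these diamond images are transparent
CGPoint center = CGPointMake(diamond.size.width / 2.0, diamond.size.height / 2.0);
NSLog(@"corner transparent? %d", [diamond isPointTransparent:corner]);
NSLog(@"center transparent? %d", [diamond isPointTransparent:center]);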

Once we have the ability to tell whether a particular point on an image is transparent, we can create our own subclass of UIButton and override the hitTest:withEvent: method to do a slightly more sophisticated hit test than UIButton's. The way this works is that we return an instance of UIView: if the point is not a hit on this view or one of its subviews, we return nil; if it's a hit on a subview, we return the subview that was hit; and if it's a hit on this view itself, we return self.

However, we can simplify this a little because, although UIButton inherits from UIView and can technically have subviews, it is exceedingly uncommon to do so and, in fact, Interface Builder won't allow it. So we don't have to worry about subviews in our implementation unless we're doing something really unusual. Here's a simple subclass of UIButton that does hit-testing based on the alpha channel of the button's image or background image, but assumes there are no subviews.

IrregularShapedButton.h
#import <UIKit/UIKit.h>

@interface IrregularShapedButton : UIButton {

}


@end



IrregularShapedButton.m
#import "IrregularShapedButton.h"
#import "UIImage-Alpha.h"

@implementation IrregularShapedButton

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Anything outside the button's bounds is definitely not a hit.
    if (!CGRectContainsPoint([self bounds], point))
        return nil;
    
    UIImage *displayedImage = [self imageForState:[self state]];
    if (displayedImage == nil) // No image found, try the background image
        displayedImage = [self backgroundImageForState:[self state]];
    if (displayedImage == nil) // No image at all; fall back to the default rectangular hit test
        return self;
    
    // Treat touches on fully transparent pixels as misses.
    if ([displayedImage isPointTransparent:point])
        return nil;
    
    return self;
}

@end


If we change the class of the four image buttons in Interface Builder from UIButton to IrregularShapedButton, they will work as expected. You can try the code out by downloading the Xcode project. Improvements and bug fixes are welcome.
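If you're creating the buttons in code rather than in Interface Builder, the same class can be used. Here's a minimal sketch (the image name, position, and action selector are placeholders); the button is sized to match the image so the hit test maps touch points directly onto the image's pixels:

// buttonWithType: with UIButtonTypeCustom returns an instance of the subclass.
IrregularShapedButton *button = [IrregularShapedButton buttonWithType:UIButtonTypeCustom];
UIImage *diamondImage = [UIImage imageNamed:@"diamond.png"]; // placeholder image name
[button setImage:diamondImage forState:UIControlStateNormal];
button.frame = CGRectMake(20.0, 20.0, diamondImage.size.width, diamondImage.size.height);
[button addTarget:self action:@selector(diamondTapped:) forControlEvents:UIControlEventTouchUpInside]; // hypothetical action
[self.view addSubview:button];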

Curiously, the documentation for hitTest:withEvent: in UIView says, "This method ignores views that are hidden, that have disabled user interaction, or have an alpha level less than 0.1." In my testing, this is actually not true, though I am unsure whether it's a documentation bug or an implementation bug.


Update: My Google-Fu failed me. I did search for existing implementations and tutorials about this subject before I wrote the posting (I hate reinventing the wheel), but I failed to find Ole Begemann's implementation of this from a few months ago. It's worth checking out his implementation to see different approaches to solving the same problem. There's also some discussion in the comments about the differences in our implementations that may be of interest if you like knowing the nitty-gritty details. Plus, his diamonds are prettier than mine.

Update 2: Alfons Hoogervorst tweaked the code and showed how you could reduce the overhead by creating an alpha-only context.
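I haven't reproduced his version here, but the general idea of an alpha-only context (one byte per pixel instead of four, so the buffer is a quarter of the size) would look something like this sketch, which is my own paraphrase rather than his code:

// Sketch of an alpha-only bitmap context: 8 bits per pixel, no color data.
CGContextRef CreateAlphaOnlyBitmapContext(CGImageRef inImage)
{
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    size_t bytesPerRow = pixelsWide; // one alpha byte per pixel
    
    void *bitmapData = malloc(bytesPerRow * pixelsHigh);
    if (bitmapData == NULL)
        return NULL;
    
    // Alpha-only contexts take no color space.
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 pixelsWide,
                                                 pixelsHigh,
                                                 8,
                                                 bytesPerRow,
                                                 NULL,
                                                 kCGImageAlphaOnly);
    if (context == NULL)
        free(bitmapData);
    return context;
}

With one byte per pixel, the lookup in isPointTransparent: becomes a single read at y * bytesPerRow + x rather than a multiply by four.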



11 comments:

O.B. said...

Instead of overriding hitTest:withEvent:, it would be cleaner to override pointInside:withEvent:. The standard implementation of hitTest:withEvent: traverses the view hierarchy and sends all subviews a pointInside:withEvent: message. The result will be the same, but I think this approach would be more in line with the design of the framework.

Jeff LaMarche said...

O.B.:

That's an interesting thought. My concern with overriding pointInside:withEvent: is that Apple hasn't documented everything it's used for. I wasn't confident from the method description that it was only used for hit testing.

You might very well be right, but the law of unintended consequences concerns me and has bitten me many times. The method hitTest:withEvent: is designed and documented as being specifically for checking if a touch hits a view and it seems the logical place to put it.

You're right that the default implementation does traverse subviews. But it seemed silly to spend time writing code to traverse subviews of a view that 99.99% of the time won't actually contain subviews and to which IB won't allow subviews to be added.

The code in this UIButton subclass can't affect hit testing outside itself, so I think your concerns are academic, but I'd be curious to know how it goes if you choose to try using pointInside:withEvent: instead.

Ole Begemann said...

(I'm O.B. from above, sorry that my name didn't get through the first time.)

Jeff: I see. I actually wrote a very similar class (OBShapedButton) a few months ago and used pointInside:withEvent:. So far I have had absolutely no problems with it but I can't say it has been used extensively.

Jeff LaMarche said...

Excellent. I've linked to your blog post and GitHub repository in the main body of the post so people can see both implementations. I did do a search for existing implementations of this idea, but I guess I'm not as good with Google as I thought. Your approach may be better (pointInside:withEvent: is documented with a "see also" to hitTest:withEvent:). I'll have to download it and check it out.
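For anyone who wants to experiment with the approach Ole describes, a minimal, untested sketch of a pointInside:withEvent: override (using the same UIImage-Alpha category) might look like this:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // Let the default implementation reject points outside the bounds first.
    if (![super pointInside:point withEvent:event])
        return NO;
    
    UIImage *displayedImage = [self imageForState:[self state]];
    if (displayedImage == nil)
        displayedImage = [self backgroundImageForState:[self state]];
    if (displayedImage == nil)
        return YES; // no image to test against, so keep the default behavior
    
    // Report transparent pixels as "not inside" so hitTest: skips this button.
    return ![displayedImage isPointTransparent:point];
}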

damiangriffin said...

Leaking Memory!!!

I found this piece of code to be incredibly useful, as I have adapted it to check whether there are any pixels remaining in an image I am drawing onto. I could never have figured it out without this. The only problem I have is that the code as it stands leaks memory pretty badly.

It probably doesn't show up much with one touch or button press, but repeated taps (as in my app, where I am testing to see if there are any pixels remaining in the contextRef) really make it show up. You can see the leak pretty well by using the memory monitor in Instruments.

I think part of the problem is the deallocation of the bitmapData. I don't think I am wise enough to figure out where this is supposed to be done. I know that I can slow the leak by using:

return [NSData dataWithBytesNoCopy:data length:dataSize freeWhenDone:YES];

instead of:

return [NSData dataWithBytes:data length:dataSize];

The weird thing is that when I use dataWithBytesNoCopy I lose access to the buffer in the iPhone Simulator, although it works OK on the actual device. I think that may be an unfortunate bug with dataWithBytes or something.

I think there must be a better way of making sure that the original void * bitmapData that is malloc'd is released.

Any thoughts?

Thanks!
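For what it's worth, and setting aside the Simulator oddity mentioned above, the dataWithBytesNoCopy:length:freeWhenDone: variant hands the malloc'd bytes to the NSData object outright, which avoids both the extra copy and the explicit free. A sketch of ARGBData written that way (untested here) might look like:

- (NSData *)ARGBData
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;
    
    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGContextDrawImage(cgctx, CGRectMake(0.0, 0.0, w, h), self.CGImage);
    
    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;
    
    // NSData takes ownership of the malloc'd buffer and frees it when released.
    return [NSData dataWithBytesNoCopy:data length:(4 * w * h) freeWhenDone:YES];
}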

Ella said...

Hi,

How can I get this working when my buttons are generated in code?

So far I have:
OBShapedButton *mybtn = [[OBShapedButton buttonWithType:UIButtonTypeCustom] retain];

mybtn.frame = CGRectMake(0.0, 0.0,screenHeight, screenWidth); //position the button and give it a width and height

UIImage *btnimg = [UIImage imageNamed:@"btnimg.png"];

[mybtn setBackgroundImage:btnimg forState:UIControlStateNormal];

[self.view addSubview:mybtn]; //display the button

It all compiles and the button shows up fine, but it still registers a tap when I click the transparent area within the PNG.

I don’t know if it matters, but I’m developing for iPad.

I’m quite new to iphone dev so sorry if this is a stupid question!

Comptrol said...

@Ella, you need to learn Objective-C to be able to develop for iOS. There is no need for the retain in the following line:
[[OBShapedButton buttonWithType:UIButtonTypeCustom] retain];

Have a look at Kochan's book. Strongly recommended.

Gustav said...

@Ella: ... and it's CGRectMake(x, y, width, height).

Gustav said...
This comment has been removed by the author.

Jimmy Park said...

I am trying to detect the transparent area of a sprite with cocos2d. I'm using your UIImage-Alpha.m and UIImage-Alpha.h files with this code.

###Figures.h###
@interface Figures : CCLayer
{
    CCSprite *image;
}
@end



###Figures.m###
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    CGPoint touchLocation = [Singleton locationFromTouch:touch];
    
    // Check if this touch is on the sprite.
    BOOL isTouchHandled = CGRectContainsPoint([image boundingBox], touchLocation);
    
    if (isTouchHandled)
    {
        UIImage *displayedImage = [self convertSpriteToImage:image];
        
        BOOL isTransparent = [displayedImage isPointTransparent:touchLocation];
        
        if (isTransparent)
        {
            NSLog(@"isTransparent");
        }
        else
        {
            NSLog(@"not Transparent");
        }
        
        [self colorChanger];
    }
    
    return isTouchHandled;
}



- (UIImage *)convertSpriteToImage:(CCSprite *)sprite
{
    CGPoint p = sprite.anchorPoint;
    [sprite setAnchorPoint:ccp(0, 0)];
    
    CCRenderTexture *renderer = [CCRenderTexture renderTextureWithWidth:sprite.contentSize.width
                                                                 height:sprite.contentSize.height];
    
    [renderer begin];
    [sprite visit];
    [renderer end];
    
    [sprite setAnchorPoint:p];
    
    return [UIImage imageWithData:[renderer getUIImageAsDataFromBuffer:kCCImageFormatPNG]];
}


but this code cannot recognize the transparent area. Could you help me?