Tuesday, August 24, 2010

UIImage-Blur

Many moons ago, I wrote a convolution kernel for Cocoa. It had convenience class methods for many different types of filters, including blurring, embossing, outlining, edge detection, horizontal shift, Laplacian, soften, high pass, and more. This was before Core Image and long before the switch to Intel. I don't remember exactly when I first wrote it, but I'm guessing it was around 2001 or 2002. The method that actually applied the filter to an image used AltiVec if it was available; if it wasn't, it did a brute-force filter on the CPU.

Of course, once the switch to Intel happened, the AltiVec code was no longer helpful, and then Apple came out with Core Image, which includes a convolution kernel along with all of the filter settings I had created and more. So, I stopped maintaining the code.

Then, when the iPhone came out and didn't have Accelerate or Core Image, I saw a potential use for the old source code. I had a project early on where I needed to be able to blur an image, so I blew the dust off the old code. I didn't convert the entire convolution kernel (I didn't want to go through the effort if it wasn't going to work), so I created just a blur category on UIImage. And it didn't work.

Pressed for time, I found another solution, since I was uncertain the processor on the original iPhone could apply a convolution kernel fast enough for my purposes, but I included the broken code when I released the source code for replicating the Urban Spoon effect. Today, I received an e-mail from a reader who figured out my rather boneheaded mistake. The convolution filter was fine; I had simply specified the wrong CGImage when creating the data provider used to convert the byte data back into a CGImage.
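
For the curious, the fix is in the step that wraps the filtered bytes back up into a CGImage. That step looks roughly like the sketch below. The variable names here are illustrative rather than the exact ones in the category, but the point is that everything handed to CGImageCreate has to describe the destination buffer, not the original source image:

    // finalData holds the filtered RGBA bytes. Every parameter passed to
    // CGImageCreate must describe this buffer, not the original image.
    // Note: with a NULL release callback, finalData must not be freed
    // while the provider is still alive.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, finalData, bitmapByteCount, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef blurredRef = CGImageCreate(width, height,
                                          8,    // bits per component
                                          32,   // bits per pixel (RGBA)
                                          bytesPerRow,
                                          colorSpace,
                                          kCGImageAlphaPremultipliedLast,
                                          provider,
                                          NULL, // no decode array
                                          NO,   // don't interpolate
                                          kCGRenderingIntentDefault);
    UIImage *blurred = [UIImage imageWithCGImage:blurredRef];
    CGImageRelease(blurredRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);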

Now, since I wrote that code, Apple has come out with the Accelerate framework for iOS, so it could definitely be made faster. It's unlikely that I'll be able to spend the time converting it to use Accelerate unless I need a convolution kernel for my own client work; I've got too much on my plate right now to tackle it. If anyone's interested in doing the full convolution kernel port, you can check out the source code to Crimson FX. It's an old Cocoa project that may not work anymore, but I believe it has the last version of the convolution kernel before I stopped maintaining it. It shouldn't be hard to port the entire convolution kernel to iOS in the same manner. Once you get to the underlying byte data, the process is exactly the same (even if the byte order is different), and the code to convert to and from the byte data is in this UIImage-Blur category.
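
To give a sense of what the port involves, the heart of any such kernel is a loop like the one below. This is just a brute-force 3x3 sketch over RGBA bytes, not the actual Crimson FX code, but the full version works the same way with larger matrices:

    #include <stddef.h>
    #include <string.h>

    // Brute-force 3x3 convolution over RGBA byte data. src and dst each
    // hold width * height * 4 bytes. Edge pixels are copied through
    // untouched to keep the sketch short.
    static void ApplyKernel3x3(const unsigned char *src, unsigned char *dst,
                               size_t width, size_t height, const float kernel[9])
    {
        memcpy(dst, src, width * height * 4);
        for (size_t y = 1; y + 1 < height; y++) {
            for (size_t x = 1; x + 1 < width; x++) {
                for (int c = 0; c < 3; c++) {      // R, G, B; alpha passes through
                    float sum = 0.0f;
                    for (int ky = -1; ky <= 1; ky++)
                        for (int kx = -1; kx <= 1; kx++)
                            sum += kernel[(ky + 1) * 3 + (kx + 1)] *
                                   src[((y + ky) * width + (x + kx)) * 4 + c];
                    if (sum < 0.0f)   sum = 0.0f;  // clamp to byte range
                    if (sum > 255.0f) sum = 255.0f;
                    dst[(y * width + x) * 4 + c] = (unsigned char)sum;
                }
            }
        }
    }

    // A box blur is nine equal weights; a 3x3 Gaussian is 1-2-1 rows over 16.
    static const float boxBlur[9]  = { 1.0f/9,  1.0f/9,  1.0f/9,
                                       1.0f/9,  1.0f/9,  1.0f/9,
                                       1.0f/9,  1.0f/9,  1.0f/9 };
    static const float gaussian[9] = { 1.0f/16, 2.0f/16, 1.0f/16,
                                       2.0f/16, 4.0f/16, 2.0f/16,
                                       1.0f/16, 2.0f/16, 1.0f/16 };

Swap in different weight matrices and you get emboss, sharpen, edge detection, and the rest.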

So, without further ado, I've created a small scaffold project to hold the unaccelerated UIImage-Blur category. Have fun with it and let me know if you use it in anything interesting. If you improve it and would like to share your improvements, let me know, and I'll post it here.

You can find the source code right here. Here's a screenshot of the test scaffold with the original image and after several blurs have been applied. The image is from the Library of Congress Prints and Photographs Collection. Plattsburgh is where I grew up, so this public domain image struck my fancy. I don't know why, but the Army base was spelled Plattsburg, without the ending 'h', even though the city has always been Plattsburgh, with the ending 'h'.

[Screenshot: the test scaffold showing the original image and the image after several blurs]

Thanks to Anthony Gonsalves for finding my error!



17 comments:

Howard Katz said...

Very nice, Jeff. Thank you. That'll give me something fun to play with and explore.

Something puzzles me, though. If you do a blur in the simulator, quit without resetting first, and then start up again in the simulator, you'll start with the previously blurred image. At first I thought you were explicitly persisting the image using NSUserDefaults, but that's not the case.

This is probably a doh-hit-myself-in-the-head kind of thing, but why does the blurred image get persisted if you quit and then start up again? I'm likely missing something very basic here.

I note that if you quit and restart in this way, the initial button title becomes "Blur More" on the restart, not "Blur". And if you go ahead and press the "Blur More" button, you'll see garbage in the image.

Howard

Jeff LaMarche said...

Howard:

I think you're just observing the multitasking behavior in iOS 4. Tapping the home button no longer quits an app automatically. If you double-tap the home button, the app's icon should be there, showing that it's in the background: not executing, but still taking up memory and maintaining state. If you tap-and-hold the icon, then press the X to kill its process and run again, it should reset to the original image.

That's my guess, anyway.

art said...

Howard,

That sounds like iOS 4 app lifecycle stuff. When you hit the home button, the app is suspended rather than quit.

Howard Katz said...

I haven't had occasion to write to the new APIs yet, but I suspect you guys are correct. That's very likely it. Thanks. That was a fun mystery for a short little while.

raheel said...

Although it runs fine in the Simulator, on the device it gives me an "EXC_BAD_ACCESS", with a trace that suggests that CA::Render wasn't able to create the final image.

Sean said...

I haven't done any more than download the code and read it, but in the blur function it appears that there are two loops doing operations on the pixels of the whole image, with the final data being written one pixel at a time into p3, which points to something inside finalDest. However, after that second main loop, destData is passed to CGDataProviderCreateWithData. If I'm reading this right, it means the results of the entire second loop through the image are thrown away, and the loop is unnecessary.

Sean said...

OK, so the bug I found just by scanning the code is probably exactly what you were talking about in your blog post, which I would have noticed if I had read the English as carefully as I read the code. :) Did the bug accidentally make it into this zip, or was that intentional? So confused! Clearly it is time for bed.

mahboud said...

I haven't had a chance to look at the code, but I'm curious whether the blur routine is biased toward square images, and therefore the blur result looks stretched?

Jeff LaMarche said...

Mahboud:

To be honest with you, I wrote this code almost a decade ago and then just converted it to UIImage. I don't remember much about it, since I haven't worked with convolution kernels in four or five years. I do know I'm using a square matrix, which may be the issue, but I honestly don't know for sure.

Sean:

See my comment to Mahboud; I really don't remember. Someday, I'd like to dive back in and re-familiarize myself. I think there was a reason for the two passes, but it was so long ago that I honestly don't remember. I don't for one second think that this code can't be improved :)

vyach said...

Hi Jeff. Thanks for the sample, but I'm having a hard time downloading the source code; the server just doesn't respond. Could you please fix the URL?
http://www.innerloop.biz/code/Blur.zip

Thank you

Roco said...

Hi Jeff, Thanks a lot for this.

I have one question, though: the blur seems to happen mostly on the vertical axis (sort of like a motion blur on the y axis) when the radius is increased. Is there a way to do something a lot closer to a Gaussian blur?

But hey, thanks a lot; this code is awesome. Cheers. :)

david.schiefer said...

It seems like this only works with smaller UIImages. If the image's resolution is larger, the application will crash with a BAD ACCESS error; it dies in something called CGSConvertABGR8888toRGBA8888.

It does, however, work with smaller, lower-resolution images.

Robert said...

I am attempting to use this code to create a blur effect on text (an image) that is scrolling. The issue I am having is that on some of the images the blur effect works great, while on others I am getting noise in the form of colored lines, usually diagonal. Any idea what could be causing this, or a possible solution if someone else has experienced it?

Thanks in advance.

lws said...

I fixed the code to use the correct variables. Now it blurs correctly in 2D, not only vertically.

See http://pastie.org/1503679, lines 87, 89, 106 (and the comments there).

arckit said...

For those people who keep getting bad access errors on the device: try commenting out the free(destData) or the free(finalData) call.

Avery said...

Nice work, arckit. It looks less like vertical motion blur and more like a nice, smooth blur.

The Guv'nor said...

There is a bug in this when you run it on a device: if you free(finalData), it will crash. But don't just comment that line out, or you will start running out of memory!

You need to use the release callback of the data provider to free the memory, by making a small change as follows.

Add a new function;

void providerRelease(void *info, const void *data, size_t size)
{
    free((void *)data);
}

Then make one change here:

CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, finalData, bitmapByteCount, providerRelease);

That is, replace the last NULL with a reference to the function that will handle the cleanup.

Hope that helps!
Roger