Tuesday, March 31, 2009

Speed with a Catch

A while back, I wrote a post about surface normals in OpenGL ES. Yesterday on Twitter, there was some discussion about using the inverse square root function from Quake 3 to speed up the performance of iPhone OpenGL ES applications. Here is what that method looks like (converted to using GL and iPhone data types):

static inline GLfloat InvSqrt(GLfloat x)
{
    GLfloat xhalf = 0.5f * x;
    int i = *(int *)&x;             // store the floating-point bits in an integer
    i = 0x5f3759df - (i >> 1);      // initial guess for Newton's method
    x = *(GLfloat *)&i;             // convert the new bits back into a float
    x = x * (1.5f - xhalf * x * x); // one round of Newton's method
    return x;
}

The inverse square root can be used in several ways. Noel Llopis of Snappy Touch pointed out two uses for it on Twitter yesterday: calculating normals and doing spherical UV texture mapping. I'm still trying to wrap my head around the UV texture mapping, but I understand normals pretty well at this point, so I thought I'd see what kind of performance gains I could get using this old optimization. There's all sorts of arguments around the intertubes about whether this function still gives performance gains, but there's an easy way to find out: use it and measure with Shark.

I used my Wavefront OBJ Loader as a test, and profiled the loading of the most complex of the three objects - the airplane. The first run was using my original code, which stupidly¹ used sqrt(). I then re-ran it using sqrtf(), and then again using the Quake 3 InvSqrt() function above.

The results were impressive, and you definitely do get a performance increase from using this decade-old function on the iPhone. Using InvSqrt() gave a 15% decrease in time spent calculating surface normals over using sqrtf() and a 40% decrease over calculating with sqrt(). That's not an amount to be sneezed at, especially in situations where you need to calculate normals on the fly many times a second.

Now, if you remember, this was how we calculated normals using the square root function from Math.h:

static inline GLfloat Vector3DMagnitude(Vector3D vector)
{
    return sqrt((vector.x * vector.x) + (vector.y * vector.y) + (vector.z * vector.z));
}

static inline void Vector3DNormalize(Vector3D *vector)
{
    GLfloat vecMag = Vector3DMagnitude(*vector);
    if (vecMag == 0.0)
    {
        vector->x = 1.0;
        vector->y = 0.0;
        vector->z = 0.0;
        return; // bail out so we don't divide by zero below
    }

    vector->x /= vecMag;
    vector->y /= vecMag;
    vector->z /= vecMag;
}

So... how can we tweak this to use the inverse square root? Well, the inverse square root of a number is simply 1 divided by the square root of that number. In Vector3DNormalize(), we divide each of the components of the vector (x, y, and z) by the magnitude of the vector, which is calculated using the square root. Since dividing a value by a number is the same as multiplying it by 1 divided by that number, we can just multiply each component by the inverse magnitude instead, like so:

static inline GLfloat Vector3DFastInverseMagnitude(Vector3D vector)
{
    return InvSqrt((vector.x * vector.x) + (vector.y * vector.y) + (vector.z * vector.z));
}

static inline void Vector3DFastNormalize(Vector3D *vector)
{
    GLfloat vecInverseMag = Vector3DFastInverseMagnitude(*vector);
    if (vecInverseMag == 0.0)
    {
        vector->x = 1.0;
        vector->y = 0.0;
        vector->z = 0.0;
        return; // bail out rather than multiplying by a bogus inverse magnitude
    }

    vector->x *= vecInverseMag;
    vector->y *= vecInverseMag;
    vector->z *= vecInverseMag;
}


Sweet, right? If we now use Vector3DFastNormalize() instead of Vector3DNormalize(), each call will be about 15% faster on current generations of the iPhone and iPod Touch compared to using the built-in square root function.

But… there's a catch. Actually, two catches.

The Catches


The first catch is that this optimization doesn't work faster on all hardware. In fact, on some hardware, it is measurably slower than using sqrtf(). That means you're gambling that future hardware will also benefit from this same optimization. Not a huge deal and very possibly a safe bet, but you should be aware of it, and be prepared to back it out quickly should Apple release a new generation of iPhones and iPod Touches that use a different processor.

The second, and far more important, catch is the possible legal ramifications of using this code. You see, Id released the Quake 3 source code under the GNU General Public License (GPL), which is a viral license. If you use source code from a GPL project, you have to open source your entire project under the GPL as well. Now, that's an oversimplification, and there are ways around the GPL, but as a general rule, if you use GPL'd code, you have to make your code GPL also.

But, the waters are a little murky. John Carmack has admitted that he didn't write that function, and doesn't think the other programmers at Id did either. The actual author of the code is unknown. Some of the contributors to the function have been found, but not the original author. That means the code MIGHT be in the public domain. If that's the case, its inclusion in a GPL application doesn't take it out of the public domain.

So, bottom line: is it safe to use? Probably. This function is widely known and widely used and there's been no indication that any possible rights owner has any interest in chasing down every use of this function. Are there any guarantees? Nope.

My recommendation is to use it, but make sure that every place you use it, you have a backup method you can fall back on if you need to. If you want some assurance, you could try contacting Id legal and getting a waiver to use that function. I don't know if they'll respond, or if they'll grant it, but the folks at Id have always struck me as good people, so it might be worth an inquiry if you're risk averse.

1 - sqrt() is a double-precision function. Since OpenGL ES doesn't support the GLdouble datatype, I was doing the calculation on twice as many bits as needed, converting from single to double precision and back again.



Apple Packaging

My new laptop arrived. As usual, I'm impressed with Apple's machines and with their packaging. I normally wouldn't have bought a new computer for another six months or so, but I've been toying with the idea of getting a new one since the 17" MacBook Pro was announced at Macworld. It's a beautiful machine. I drooled over the few samples they had on the show floor and have had a severe case of technolust since then. The longer battery life is a great feature for me and I've been incredibly impressed with the unibody MacBook I bought for my wife a few months ago. I found out recently that Dave and I had actually earned some royalties in the fourth quarter of 2008. I decided to write the book with Dave with the expectation that we probably would never make more than the advance. The fact that we earned out the advance and sold enough to get additional royalties in the first six weeks it was on sale seemed like a good justification for buying a new machine ahead of my normal schedule. I ordered it Thursday. It left China on Saturday, and it arrived on my doorstep at around 10:00 this morning.

This generation of unibody aluminum computers is nothing short of amazing. They feel solid and well-built like no other laptop I've ever picked up, yet are fairly light and thin. They seem to run considerably cooler than previous generations, the graphics chips are much better (and this machine has two of them!), the screens are ever so much brighter, and the keyboard feels really nice. That last one I was really unsure about - I didn't think I would like the "chiclet" style keyboard, but I do. Very much so.

Here's the thing that impressed me the most, though.



From left to right, we have the box for the new 17" MacBook Pro, the previous generation 17" MacBook Pro, and the first generation 17" MacBook Pro. This would be even more dramatic if I had bothered to run downstairs to get the 17" PowerBook box I still have in the basement, because it's considerably bigger even than these. Every generation, Apple figures out a way to make the boxes smaller yet still protect the machines during transport. For the record, all of these machines are approximately the same size. There are minor differences in the footprint, but not enough to justify a noticeable difference in packaging.

As you can see from this picture:



There's not much in the way of wasted space. And even the outer box didn't have a lot of extra room - here it is nested in the shipping box:



I don't see that they could save much more room and still ship it safely. Visually, the changes are somewhat subtle, but on closer inspection, they are two completely different machines. Here's a picture of the new machine transferring files from my Time Machine backup. There was a window behind me in this shot and it's a sunny day, so there was a LOT of light on the machine, yet the screen is still bright enough to wash out in the picture.





Monday, March 30, 2009

WWDC First Time Guide

Simon Wolf suggested via Twitter that I write up some info for first timers to WWDC. There are plenty of people who have been going longer. Heck, I can think of a few people I follow on Twitter who have been going since it was held in San Jose, and at least one who remembers when it was just called the Apple Spring Developers Conference. The event changes from year to year, and the only constants seem to be that the event gets bigger, and the bags get worse.

So, I don't want to pass myself off as an expert here, at least relative to several others I can think of. But I can think of a few pointers that may help first-timers.
  1. Do not lose your badge. If you lose it, you are done. You will spend your time crying on the short steps in front of Moscone West while you watch everyone else go in to get edumacated. Sure, you'll still be able to attend the after-hours and unofficial goings-on (except the Thursday night party, which is usually a blast), but you'll miss out on the really important stuff. No amount of begging or pleading will get you a replacement badge, and since they're likely to sell out, no amount of money will get you another one, either. And that would suck. Treat it like gold. When I'm not in Moscone West or somewhere else where I need the badge, I put it in my backpack, clipped to my backpack's keyper (the little hook designed to hold your keys so they don't get lost in the bottom of your bag).

  2. Eat your fill. They will feed you two meals a day; you're on your own for dinner. Breakfast starts a half-hour before the first session, and it's probably going to be a continental breakfast - fruit, pastries, juice, coffee, donuts, toast, and those round dinner rolls that Californians think are bagels, but really aren't. If you're diabetic or need to eat gluten-free, you probably want to eat beforehand. Lunch used to be (IIRC) a hot lunch, but last year they were boxed lunches. They were pretty good as far as boxed lunches go, but they were boxed lunches. I know a lot of people choose to go to a nearby restaurant during the lunch break, which is pretty long - at least 90 minutes.

  3. Party hard (not that you have a choice). There are lots of official and unofficial events in the evening. There's usually a CocoaHeads meeting at the Apple Store. It fills up crazy fast, so go early if you go. It's usually on Tuesday, and it's usually competing with several other parties, but it starts earlier than most events and finishes early enough for people to go to other parties when it's done. Best bet is to follow as many iPhone and Mac devs on Twitter as you can - the unofficial gatherings happen at various places downtown, often starting with a few "seed crystal" developers stopping for a drink and tweeting their whereabouts. The unofficial, spontaneous gatherings can be really fun and a great opportunity. The parties often start before WWDC - there are usually a few on Sunday, and there have been ones as early as the Saturday before. The Harlot at 111 Minna is a common place for parties, as are Jillians in the Metreon, and the Thirsty Bear on Howard. There are other common spots that escape me right now, but as we get closer, there will be lists and calendars devoted to all the events and parties. Some are invite-only, but many are first-come, first-served. Although there's a lot of drinking going on, these are worth attending even if you don't drink. Great people, great conversations... completely good times.

  4. Take good notes. You are going to be drinking knowledge from a firehose there. The information will come at you fast and furious. As an attendee, you will get all the session videos on ADC on iTunes, but it takes months and months before they become available, so the things you need to know now, write down.

  5. Labs rule. If you're having a problem, find an appropriate lab. One of the concierges at any of the labs can tell you exactly which teams and/or which Apple employees will be at which labs when. If you're having an audio problem, you can easily stalk the Core Audio team until they beat the information into your skull, for example (that example is from personal experience - those guys are awesome, by the way). It's unstructured, hands-on time with the people who write the frameworks and applications. People start remembering the labs later in the week it seems, but early on, you can often get an engineer all to yourself.

  6. Buddy up, divide and conquer. There will be at least a few times when you want to be at more than one presentation at the same time. Find someone who's attending one and go to the other (Twitter is a good way to find people), then share your notes.

  7. Make sure to sleep on the plane. You won't get many other chances once you get there. Everybody is ragged by Friday, some of us even earlier. Everyone remains surprisingly polite given how sleep-deprived and/or hungover people are.

  8. Thank your hosts. The folks at Apple, the engineers and evangelists who give the presentations and staff the labs, kill themselves for months to make WWDC such a great event. So, do your mother proud and remember your manners. Say thank you when someone helps you, or even if they don't. And if you see one of them at an after hours event, it's quite alright to buy them a beer to say thanks.

  9. Remember you're under NDA. This one is hard, especially for me. We see so much exciting, amazing stuff that week that it's natural to want to tweet it, blog it, or even tell the guy handing out advertisements for strip joints on the corner all about it. Don't. Everything from morning to night, except the Keynote and the Thursday night party, is under NDA.

  10. Brown Bag it. Most days there are "brown bag" sessions. These are speakers not from Apple who give entertaining, enlightening, or inspiring talks at lunchtime. Unfortunately, my favorite brown bag session, the presentation by Dr. Michael "Wave" Johnson, head of the Moving Pictures Group at Pixar, isn't happening this year. Despite that, check the schedule; some of them are bound to be well worth your time.

  11. Monday, Monday. I don't know what to say about Monday. Last year, people started lining up at midnight the night before. I was still on East coast time, so for grins and giggles (since I was up anyway), I walked over at 4:15 to see if anyone was in line, not expecting to find more than a couple of insane people. I found several hundred insane people, so I stayed and became an insane person myself. By 6:00am (when the line used to start forming), the line was five-wide and went around the corner. By the time they let us into the building at around 7:00, many of us had to pee awfully bad. They wound us around the first floor, then up the escalators and around the second floor, letting us go a little further every once in a while until we were about a hundred feet from the escalators going up to the third floor.

    Personally, I'm not sure I want to get up quite as early this year, but I did get to talk to a lot of very cool people last year while waiting in line, and there is a sense of camaraderie that develops when you do something silly with other people like that. Some people probably want me to suggest what time to get in line. I have no idea. Most people will get into the main room to see the Keynote. There may be some people diverted to an overflow room, but because the number of attendees is relatively low and the Presidio (the keynote room) is so big, it's a tiny percentage who have to go to the overflow rooms (maybe the last 1,000 in a worst-case scenario). On the other hand, you'll actually get a better view in the overflow rooms unless you get there crazy early - you'll get to watch it in real time on huge screens and you'll get to see what's happening better than the people at the back of the Presidio. So, go when you want to. If you want to get up early and go be one of the "crazy ones", cool! If you want to get up later, you'll still get to see the keynote sitting in a comfy room with other geeks. And no, I have no idea if Steve is returning for the keynote.

  12. Park it once in a while. There will be time between sessions, and maybe even one or two slots that have nothing you're interested in. Or, you might find yourself too tired to take in the inner workings of the Shark performance tool. In that case, there are several lounges around where you can crash in a bean bag chair, comfy chair, or moderately-comfy chair. There is wi-fi throughout the building and wired connections and outlets in various spots on all floors. So, find a spot, tweet your location, and zone out for a little while or do some coding. You never know who you might end up talking with. If you move around too much, well, let's just say a moving target is harder to hit than a stationary one.


Have more suggestions for first-timers? Let me know and I'll add them.

Update: Check the comments for more great tips. One in particular I wanted to highlight - make sure you register on Sunday. Registration won't open on Monday until long after most people have gotten in line. Registration is usually open until 4:00pm, so try and get over there to pick up your badge, t-shirt, and bag so you'll be ready whatever time you decide to get in line.



WWDC Accommodations

Staying downtown in San Francisco is very expensive in the summertime. But if you're going to WWDC, you really want to stay downtown. You do not want to be taking the BART in if you can help it, and you really don't want to be driving and looking for parking.

With the closest chain hotels — the full service Marriott and the W — averaging around a $400 daily rack rate, and the Courtyard on 2nd (about four blocks away) costing well over $200 a night, it can be hard to find a place to stay that won't knock the hell out of your travel budget.

Here are a couple of hotels not run by Marriott, Hilton, or Starwood chains that are reasonable and seem to be well reviewed. They are both within a few blocks of Moscone West:

The Hotel Palomar is a well-reviewed Kimpton hotel that's very close to Moscone West. The standard room rate is well under $200, and the AAA discount brings it well under $150 (on average; some nights may cost a little more).

The Powell Hotel, run by the Miramir Hospitality Group, is right across Market, probably three blocks from Moscone, and has an average nightly rate of $144 during the week of WWDC.

If anyone has any other accommodation recommendations for WWDC, post them in the comments and I will add them here.

Note: I have not stayed at any of these hotels myself (although I have a reservation at the Palomar for this year), so do your due diligence before making a reservation. I cannot personally guarantee you that they will have every feature that you wish or that they will meet a certain level of cleanliness, I'm just trying to provide some more reasonable alternatives to the big chains within walking distance to Moscone. My personal experience on this matter is very limited; in the past, I have always stayed at the Courtyard on 2nd or the Marriott on 4th because I always had elite status with Marriott from all the travel I used to do, so I was able to stay on points.

Update 1: According to Julio Barros, Expedia had good prices on the Westin and the W last week (I just checked the Westin, and they had it for $174 a night) so it's worth checking them out.

Update 2 Brian Gorby says the Hotel Triton on Grant Ave. is "clean, comfy, and friendly", though you're looking at about an eight-block walk. Not a killer, but enough to discourage you from going back to your hotel during the day. Though, that may be a good thing.

Update 3 Bill says the InterContinental on Howard runs about $180-$210 and is only a block or two from Moscone, and Toby Joe says he's staying at the Clift hotel at 495 Geary (about 6 blocks away) for about $200.

Update 4 Mike Taylor pointed out via Twitter that the Villa Florence was renovated last year and he was able to get a room for only $119.

No more updates - the comments are growing too fast. The comments are loaded with good suggestions, so just read them. Maybe I'll consolidate all the suggestions after they've slowed down.



Wavefront OBJ Loader Open Sourced to Google Code

I have made a minor update to the Wavefront OBJ Loader and released it on Google Code. You can find its new homepage right here.

The UV texture mapping is still wonky - I haven't had time to look at that. After profiling, I realized I could save quite a bit of time by using sqrtf() instead of sqrt(); since OpenGL ES doesn't support GLdouble, there was no point in using the higher-precision square root function.

I also implemented, based on some tweets by Noel Llopis of Snappy Touch fame, a faster normalization function that utilizes the fast inverse square root optimization. This is an optional optimization based on a pre-compiler define.

I really don't have any immediate plans to do much with this, but if anyone wants to work on it, I'm happy to add you as a project member. If you're interested in loading 3D objects, there's a fair amount of useful code in this project you can borrow and learn from, but there's also plenty of room for further optimizations if you feel like trying out Shark.



Sunday, March 29, 2009

Apple Store LA Book Sighting

Here's a picture sent to Dave by Dave Wooldridge of Electric Butterfly. It's our book on the shelves of the Apple Store in Los Angeles at The Grove.



Someday, I hope to see one of these in person. If not before June, I should be able to see it on the shelves of the Apple Store in San Francisco during the week of WWDC.

Unfortunately, the closest Apple Store to my home is an hour away, and it's not one of the top fifty stores carrying the book now.



Differences in Delegation

Cocoa and Cocoa Touch obviously have a lot in common. They use the same underlying language, they both utilize Foundation classes, and they both follow many of the same design patterns. But the fact that they are designed to work on different kinds of physical hardware, and the fact that Cocoa Touch was created nearly twenty years after Cocoa (née NeXTSTEP), mean that there are some areas that are very different. These differences can throw you, since so much between the two is the same.

The most obvious of these differences is that the concept of generic view controller classes is "baked in" to Cocoa Touch, but was added to Cocoa only after it had been around for years. But that's a topic for a separate blog posting. Today, I want to talk about another, slightly more subtle difference, which is that Cocoa and Cocoa Touch implement delegates in completely different ways. Let's look at the most commonly used delegate objects: the application delegates. The two frameworks' application delegates, UIApplicationDelegate and NSApplicationDelegate, serve the same purpose, but are implemented differently.

Delegation is not unique to Objective-C. It's a recognized pattern that is used in many languages, albeit sparingly in most. Because of Objective-C's dynamic, loosely typed nature, the Apple and NeXT engineers realized early on that delegation was often a better choice than inheritance, which is why the class hierarchy for Cocoa and Cocoa Touch is generally flatter than the hierarchies of object-oriented application frameworks built in other languages. If you have come to Objective-C recently from another OO language, whenever your first impulse is to subclass, take a step back and ask yourself if another design pattern, like delegation or a category, doesn't fit better in light of the language you are using.

Anyway, back to the application delegates. The big difference between UIApplicationDelegate and NSApplicationDelegate is in what they are. NSApplicationDelegate is an informal protocol, which means that it's simply a category on NSObject. Below, you can see what NSApplicationDelegate looks like in Leopard. I have taken out comments and pre-compiler macros and reformatted it to make it easier to read. You can find the original in <Cocoa/NSApplication.h>:

@interface NSObject(NSApplicationDelegate)
- (NSApplicationTerminateReply)applicationShouldTerminate:(NSApplication *)sender;
- (BOOL)application:(NSApplication *)sender openFile:(NSString *)filename;
- (void)application:(NSApplication *)sender openFiles:(NSArray *)filenames;
- (BOOL)application:(NSApplication *)sender openTempFile:(NSString *)filename;
- (BOOL)applicationShouldOpenUntitledFile:(NSApplication *)sender;
- (BOOL)applicationOpenUntitledFile:(NSApplication *)sender;
- (BOOL)application:(id)sender openFileWithoutUI:(NSString *)filename;
- (BOOL)application:(NSApplication *)sender printFile:(NSString *)filename;
- (NSApplicationPrintReply)application:(NSApplication *)application
                            printFiles:(NSArray *)fileNames
                          withSettings:(NSDictionary *)printSettings
                       showPrintPanels:(BOOL)showPrintPanels;
- (void)application:(NSApplication *)sender printFiles:(NSArray *)filenames;
- (BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)sender;
- (BOOL)applicationShouldHandleReopen:(NSApplication *)sender hasVisibleWindows:(BOOL)flag;
- (NSMenu *)applicationDockMenu:(NSApplication *)sender;
- (NSError *)application:(NSApplication *)application willPresentError:(NSError *)error;
@end

This may seem rather odd. Why would we declare a category on NSObject for delegate methods? Depending on your language background, you might be wondering why this isn't a protocol or interface. The answer is simple, really. In Objective-C prior to 2.0, protocols (sometimes referred to as "formal protocols") did not allow optional methods. If you conformed to a protocol, you had to implement every method in that protocol. That wouldn't have worked very well; the Apple and NeXT engineers didn't want to force programmers to respond to every conceivable method any application delegate would ever need in their own delegates. Rather, they wanted to let programmers implement only the delegate methods that they needed. By declaring it as a category and creating what we call an "informal protocol", the compiler and the programmer are told what methods this delegate can respond to, but no obligation is imposed on the programmer to implement any particular method. It's perfectly valid (though silly) for a delegate to respond to none of the delegate methods.
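To make the mechanics concrete, here's a sketch of how a framework class like NSApplication can safely call an optional informal-protocol method. This is my own illustration, not Apple's actual implementation:

```objc
// Hypothetical sketch - not Apple's actual code. Because the delegate
// may implement any subset of the informal protocol, the framework
// checks before each call and falls back to a default if the delegate
// didn't implement the method:
NSApplicationTerminateReply reply = NSTerminateNow;
if ([delegate respondsToSelector:@selector(applicationShouldTerminate:)])
    reply = [delegate applicationShouldTerminate:self];
```

This respondsToSelector: check is the whole trick: it's how "skip the call and continue" (described below) is actually implemented, and it's only possible because Objective-C lets you ask any object at runtime which messages it responds to.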

In the Cocoa approach to delegates, the mutator method for a delegate usually looks like this:

- (void)setDelegate:(id)anObject;

In other words, an instance of any class can be set as the delegate. If that delegate implements a particular delegate method, that method will be called at the appropriate time. If it doesn't implement it, NSApplication will simply skip the call and continue execution of the program. It's a very laissez-faire approach that puts a lot of trust in the programmer. That's Cocoa in a nutshell, really. Objective-C doesn't give the programmer mechanisms to completely lock out other programmers the way, say, Java, C++, and C# do. There are no final classes, and declaring things @private is really more of a suggestion. The nature of the language leads to a different approach to many things. It may seem weird - even wrong - if you come from one of the many languages that derive their object-model from the far-less-trusting Simula, but give it time. Properly used, it's very elegant.

On the other hand, UIApplicationDelegate, the iPhone's application delegate, is not implemented using a category; it's implemented using a formal protocol. Since the iPhone was developed after Objective-C 2.0 was released, Apple had the option of using optional methods in formal protocols, so in Cocoa Touch most (I think all) delegates are defined as formal protocols. This is what the application delegate looks like in Cocoa Touch. Again, I have reformatted and removed comments to make it read easier in this context:

@protocol UIApplicationDelegate<NSObject>
@optional
- (void)applicationDidFinishLaunching:(UIApplication *)application;
- (void)applicationDidBecomeActive:(UIApplication *)application;
- (void)applicationWillResignActive:(UIApplication *)application;
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url;
- (void)applicationDidReceiveMemoryWarning:(UIApplication *)application;
- (void)applicationWillTerminate:(UIApplication *)application;
- (void)applicationSignificantTimeChange:(UIApplication *)application;
- (void)application:(UIApplication *)application
    willChangeStatusBarOrientation:(UIInterfaceOrientation)newStatusBarOrientation
    duration:(NSTimeInterval)duration;
- (void)application:(UIApplication *)application
    didChangeStatusBarOrientation:(UIInterfaceOrientation)oldStatusBarOrientation;
- (void)application:(UIApplication *)application
    willChangeStatusBarFrame:(CGRect)newStatusBarFrame;
- (void)application:(UIApplication *)application
    didChangeStatusBarFrame:(CGRect)oldStatusBarFrame;
@end


It's really not all that different. Of course, the delegate methods are not exactly the same due to the different nature of the devices for which Cocoa and Cocoa Touch were designed, but this protocol says basically the same thing our earlier informal protocol said: that any object can be a delegate, and since it declares all of the methods as @optional, the delegate only needs to implement those methods it cares about. The mutator method for a delegate in Cocoa Touch looks similar to the one in Cocoa, but with two differences. First, UIApplication uses Objective-C 2.0's properties and synthesizes the mutator rather than declaring one manually. Second, and more important, though it still accepts id (meaning any object), it requests that the object being assigned as the delegate conform to the UIApplicationDelegate protocol:

@property(nonatomic,assign) id<UIApplicationDelegate> delegate;

This being Objective-C, you actually can assign any object to be the delegate, even one that doesn't conform to the protocol, but if the assigned object's class doesn't explicitly conform to UIApplicationDelegate, you will get a compile-time warning. Fortunately, all the iPhone application templates give you your application delegate, and the provided delegate's class already conforms to that protocol, so you rarely ever need to do that step for the application delegate, but when it comes to the countless other delegates in Cocoa Touch, you will need to explicitly conform your class to the delegate protocol.
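Conforming is just a matter of listing the protocol in the angle brackets of your class declaration. A hypothetical example (MyViewController and the ivar name are made up, but UITableViewDelegate is a real Cocoa Touch delegate protocol):

```objc
// Hypothetical example: a made-up view controller class declaring
// that it conforms to UITableViewDelegate, so it can be assigned as
// a table view's delegate without a compiler warning.
@interface MyViewController : UIViewController <UITableViewDelegate>
{
    UITableView *tableView;
}
@end
```

If a class acts as the delegate for several different objects, you simply list all of the protocols inside one pair of angle brackets, separated by commas.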

Whether Cocoa will switch to using formal protocols is, as of now, an unanswered question. So far, I've seen no indication of Apple making this change. As of Leopard, every delegate I can think of is implemented as a category on NSObject, but I also don't have any more information about what's going on inside the loop than most of you do, and I don't know if they have plans to change this. They might. They might not.

What I do know is that not everybody agrees that the newer way of doing delegates is better. I've talked to a number of old-time Cocoa/Mac programmers who view the new approach as less elegant and see it as an unnecessary change. I don't really have much of an opinion, to be honest. As a practical matter, there's not much of a difference in the way we use delegates with either approach. Other than conforming our classes to one protocol, pretty much everything works the same.

The new approach is a tradeoff. It adds some compile-time checks that aren't available with informal protocols, but it also has some unintended consequences. For example, when you have subclasses of classes with delegates, you can have situations where a delegate has to conform to a protocol for an object for which it's not the delegate. You can see an example of this in Beginning iPhone Development in Chapter 16. On page 467, we conform CameraViewController to UINavigationControllerDelegate, even though it's not the delegate of a UINavigationController. Why? Because UIImagePickerController is a subclass of UINavigationController, and the two classes' delegates require different protocols, the compiler forces us to conform to both. It's a pretty minor inconvenience - it requires typing a single class name - but it rubs some Cocoa developers I've talked to the wrong way as being very "un-Cocoa-like" and inelegant.
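In code, the extra conformance is just one more name in the angle brackets of the @interface line. This is a sketch from memory rather than the book's exact listing, with the instance variables omitted:

```objc
// Because UIImagePickerController is a subclass of UINavigationController,
// and both classes declare delegate protocols, the picker's delegate must
// list both protocols to compile without warnings.
@interface CameraViewController : UIViewController
        <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
@end
```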

Let's finish off with one important last bit of information about delegates that applies to both Cocoa and Cocoa Touch: as a general rule, objects do not send a -retain message to their delegate. There are a few exceptions to this, but unless a class is specifically documented as retaining its delegate, assume that it doesn't. If your delegate may get deallocated before the object it is a delegate for, make sure to set that object's delegate to nil before your delegate goes away, so the object doesn't end up sending messages to a freed instance.
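Here's a rough sketch of what that looks like under retain/release memory management, using a hypothetical controller that serves as a table view's delegate (the tableView ivar and ownership details are assumptions for the sake of the example):

```objc
- (void)dealloc
{
    // The table view does not retain us, so it may outlive this object.
    // Clear its back-reference before we're deallocated to prevent it
    // from messaging a freed object.
    if (tableView.delegate == self)
        tableView.delegate = nil;
    [tableView release];
    [super dealloc];
}
```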



Friday, March 27, 2009

Icons for Multiple Developer Tool Installs

If you have multiple copies of the developer tools installed on your Mac (say, one for SDK 2.2.1 and one for 3.0), it can be hard to tell the two copies of Xcode apart in the Dock. Somebody on Twitter (maybe Craig Hockenberry?) suggested altering the icon of one of them so you can tell which is which. I thought that was a brilliant idea.

Here is an icon file you can use if you want. Right-click and choose "Download Linked File" to save it.

This is what it looks like (nothing too fancy):


To change the icon, navigate to the Applications subdirectory of the beta /Developer folder, then right-click on Xcode.app and select Show Package Contents to open up the bundle. Once the new window comes up, drag the new appicon.icns to the folder inside the bundle called Resources, choosing to replace the existing file.

You may need to log out and back in to make the change take effect in your Dock. You could also quit the Dock from the command line, or remove and re-add the beta Xcode icon.



NSConference

There's another great conference coming before WWDC. It's one I won't be able to attend, but it looks pretty awesome. If you're in the UK or even from continental Europe and can't afford the trip to WWDC, NSConference may be an acceptable alternative to tide you over till next year.

It's not an official Apple event like WWDC, so you won't get the benefit of talking to the Apple engineers or seeing Steve Jobs speak (assuming he does at WWDC, that is), but they've got a heck of a lineup of speakers including the incomparable Mike Lee (now an Apple employee), Bill Dudney, Matt Gemmell, Andre Pang, Fraser Speirs, Philippe Mougin, Graham Lee, Marcus Zarra, and Drew McCormack.

I rather wish I could go, to be honest. Maybe next year.



Xcode Single Window Mode

Even though it's on the first pane you see in Xcode's preferences, a lot of people don't realize that Xcode has a couple of different modes it can work in, including an "All-in-One" mode (often referred to as "single-window mode"). The original Project Builder IDE (Xcode's predecessor) used the multiple-window paradigm that's familiar to long-time Mac and NeXT users. To a large extent, Xcode follows that basic model in its default mode. The debugger, breakpoints, console, and compile error feedback all come to you by way of different windows.

Now, if you're on a machine with multiple monitors, this really is still the way to go. You can put your breakpoints and console on one monitor and run your program or the iPhone Simulator on the other, for example, so you can always see them. You can move the various windows around to make the best use of your available screen real estate.

On the other hand, if you are on a laptop, or some other single-screen setup, especially one with a smaller screen, all those windows can be a little obnoxious. Xcode's single-window mode is ideal for these scenarios.

I've been using Xcode (and before that Project Builder) long enough that I was resistant to the change. But, I'm a laptop guy. I live on my laptop, and only rarely hook up a second display. This is a habit I developed from seven years of non-stop travel. I've gotten quite efficient using just the screen on my 17" MacBook Pro and find that I don't use extra screen real estate in the form of a second monitor effectively when it's available.

Recently, I realized I wasn't as efficient as I should be, though. All those extra windows in Xcode were hard to manage, and I was using Exposé constantly because I usually work with Xcode's project windows maximized to take up the full screen. So, I decided to give the all-in-one mode a chance.

I had to revert to the default mode yesterday to take some screenshots for my current project. In less than a week, I had become not just a fan of the all-in-one mode but an addict. I hated turning it off, and I hate that I can't use all-in-one mode for my writing projects.

I made a few minor changes to make single-window mode work better for me. First, I re-mapped ⌘0 (command-zero, the default project view key binding) to ⌘1 (command one), and re-mapped ⇧⌘R (the debugger window key mapping, which takes you to the other project view in single-window mode) to ⌘2. Doing that, I can easily swap back and forth between the two views with one hand. I can't remember who this tip came from - someone on Twitter - but it's a lifesaver.

I've also tweaked the toolbar a little in single-window mode. Chances are I will be tweaking more as I get more accustomed to this layout, but this is my current layout:

If you work on a single display, it's probably worth your time to give all-in-one mode a spin for a few days.