Monday, May 25, 2009

OpenGL ES From the Ground Up, Part 6: Textures and Texture Mapping

An alternative to creating materials in OpenGL ES to define the color of a polygon is to map a texture onto that polygon. This is a handy option that can give you good-looking objects while saving a lot of processor cycles. Say you wanted to create a brick wall in a game. You could, of course, create a complex object with thousands of vertices to define the shape of the individual bricks and the recessed lines of mortar between them.

Or you could create a single square out of two triangles (four vertices) and map a picture of a brick wall onto the square. Simple geometries with texture maps render much faster than complex geometries using materials.

Turning Things On

In order to use textures, we need to flip some switches in OpenGL to enable the features we need:

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_SRC_COLOR);

The first function call switches on the ability to use two-dimensional images. This call is absolutely essential; you can't map an image onto a polygon if you don't turn on the ability to use textures in the first place. It can be enabled and disabled as necessary; however, there's typically no need to do that. You can draw without using textures even with this turned on, so you can generally just make this call once in a setup method and forget about it.

The next call turns on blending. Blending gives you the ability to combine images in interesting ways by specifying how the source and destination will be combined. This would allow you, for example, to map multiple textures to a polygon to create interesting textures. "Blending" in OpenGL, however, refers to any combining of images or of an image with a polygon's surface, so you need to turn blending on even if you don't need to blend multiple images together.

The last call specifies the blending function to use. A blending function defines how the source image and the destination image or surface are combined. Without getting too far ahead of ourselves, OpenGL is going to figure out (based on information we'll give it) how to map the individual pixels of the source texture to the portion of the destination polygon where it will be drawn.

Once OpenGL ES figures out how to map the pixels from the texture to the polygon, it uses the specified blending function to determine the final value for each pixel drawn. The glBlendFunc() function is how we specify those blending calculations, and it takes two parameters. The first defines how the source texture is used; the second defines how the destination (the color or texture already on the polygon) is used. In this simple example, we want the texture to draw completely opaque, ignoring any color or existing texture on the polygon, so we pass GL_ONE for the source, which says that the value of each color channel in the source image (the texture being mapped) will be multiplied by 1.0 or, in other words, used at full intensity. For the destination, we pass GL_SRC_COLOR, which tells it to use the color from the source image that has been mapped to this particular spot on the polygon. The result of this particular blending function is an opaque texturing. This is probably the most common combination you'll use. We'll perhaps look at blending functions in more detail in a future installment in this series, but this is the only blending function we'll be using today.
NOTE: If you've used OpenGL and already know about blending functions, you should be aware that OpenGL ES does not support all of the blending function constants that OpenGL supports. The following are the only ones OpenGL ES allows: GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR, GL_DST_COLOR, GL_ONE_MINUS_DST_COLOR, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA, and GL_SRC_ALPHA_SATURATE (which can be used for the source only).

Creating Textures

Once you've enabled textures and blending, it's time to create your textures. Typically, textures are created at the beginning of the program execution, or at the beginning of a level load in a game, before you begin displaying the 3D view to the user. This is not a requirement, just usually a good idea, because creating textures takes a bit of processing power and can cause noticeable hiccups in program execution if done after you've begun to display complex geometries.

Every image in OpenGL ES is a texture, and textures cannot be displayed to the user except when mapped onto an object. Well, there is one quasi-exception to that rule called point sprites, which allow you to draw an image at a given point, but those have their own quirks, so that's a topic for a separate posting. In general, though, any image you display to the user has to be placed on triangles defined by vertices, sort of like applying stickers to them.

Generating a Texture Name

To create a texture, you ask OpenGL ES to generate a texture name for you. This is a confusing term, because a texture's name is actually a number: a GLuint, to be specific. Though the term "name" would suggest a string to any sane person, that's not what it refers to in the context of OpenGL ES textures. It's an integer value that represents a given texture. Each texture is represented by a unique name, so passing a texture's name into OpenGL is how we identify which texture we want to use.

Before we can generate the texture name, however, we need to declare an array of GLuints to hold the texture name or names. OpenGL ES doesn't allocate the space for texture names, so we have to declare an array like so:

    GLuint      texture[1];

Even if you are only using one texture, it's still common practice to use a one-element array rather than declaring a single GLuint, because the function used to generate texture names expects a pointer to an array. Of course, it's possible to declare a single GLuint and pass its address, but it's just easier to declare an array.

In procedural programs, textures are often stored in a global array. In Objective-C programs, it's much more common to use an instance variable to hold the texture name. Here's how we ask OpenGL ES to generate one or more texture names for us:

    glGenTextures(1, &texture[0]);

You can create more than one texture when you call glGenTextures(); the first number you pass tells OpenGL ES how many texture names it should generate. The second parameter needs to be an array of GLuints large enough to hold the number of names specified. In our case, our array has a single element, and we're asking OpenGL ES to generate a single texture name. After this call, texture[0] will contain the name for our texture, so we'll use texture[0] in all of our texture-related calls to specify this particular texture.

Binding the Texture

After we generate the name for the texture, we have to bind the texture before we can provide the image data for that texture. Binding makes a particular texture active. Only one texture can be active at a time. The active or "bound" texture is the one that will be used when a polygon is drawn, but it's also the one that new texture data will be loaded into, so you must bind a texture before you provide it with image data. That means you will always bind every texture at least once to provide OpenGL ES with the data for that texture. During runtime, you will bind textures additional times (but won't provide the image data again) to indicate that you want to use that texture when drawing. Binding a texture is a simple call:

    glBindTexture(GL_TEXTURE_2D, texture[0]);

The first parameter will always be GL_TEXTURE_2D because we are using two-dimensional images to create our texture. Regular OpenGL supports additional texture types, but the version of OpenGL ES that currently ships on the iPhone only supports standard two-dimensional textures and, frankly, even in regular OpenGL, two-dimensional textures are used far more than the other kinds.

The second parameter is the texture name for the texture we want to bind. After calling this, the texture for which we previously generated a name becomes the active texture.

Configuring the Image

After we bind the image for the first time, we need to set two parameters. There are several parameters we CAN set if we want to, but two that we MUST set in order for the texture to show up when working on the iPhone:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

The reason these two must be set is that the default value uses something called a mipmap. I'm not going to get into mipmaps today; put simply, we're not using them. Mipmaps are versions of an image at several different sizes, which allows OpenGL to select the closest size to avoid doing as much interpolation, and to manage memory better by using smaller textures when objects are farther from the viewer. The iPhone, thanks to its vector units and graphics chip, is actually pretty good at interpolating images, so we're not going to bother with mipmaps today. I may do a future posting on them, but for today, we're just going to tell OpenGL ES to scale the one image we're giving it to whatever size it needs using linear interpolation. We have to make two calls because one, GL_TEXTURE_MIN_FILTER, is used when the texture has to be shrunk down to fit on the polygon, and the other, GL_TEXTURE_MAG_FILTER, is used when the texture has to be magnified, or increased in size, to fit on the polygon. In both cases, we pass GL_LINEAR to tell it to scale the image using a simple linear interpolation algorithm.

Loading the Image Data

Once we've bound a texture for the first time, it's time to feed OpenGL ES the image data for that texture. There are two basic approaches to loading image data on the iPhone. If you find code in other books for loading texture data using standard C I/O, that will likely work as well, but these two approaches should cover the bulk of the situations you'll encounter.

The UIImage Approach

If you want to use a JPEG, PNG, or any other image supported by UIImage, then you can simply instantiate a UIImage instance with the image data, then generate RGBA bitmap data for that image like so:

    NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"png"];
    NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
    UIImage *image = [[UIImage alloc] initWithData:texData];
    if (image == nil)
        NSLog(@"Do real error checking here");
    
    GLuint width = CGImageGetWidth(image.CGImage);
    GLuint height = CGImageGetHeight(image.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc( height * width * 4 );
    CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
    CGColorSpaceRelease( colorSpace );
    CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
    CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
    CGContextRelease( context );
    
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    
    free(imageData);
    [image release];
    [texData release];

The first several lines are fairly straightforward - we're just loading an image called texture.png out of our application's bundle, which is how we pull in resources included in the Resources folder of our Xcode project. Then we use some Core Graphics calls to get the bitmap data into RGBA format. This basic approach lets us take any kind of image data that UIImage supports and convert it to a format that OpenGL ES can accept.

Note: Just because UIImage doesn't support a filetype you want to use doesn't mean you can't use this approach. It is possible to add support to UIImage for additional image filetypes using Objective-C categories. You can see an example of doing that in this blog posting where I added support to UIImage for the Targa image filetype.

Once we have the bitmap data in the right format, we call glTexImage2D() to pass the image data into OpenGL ES. After we do that, notice that we free up a bunch of memory, including the image data and the actual UIImage instance. Once you've given OpenGL ES the image data, it allocates memory to keep its own copy of that data, so you are free to release all of the image-related memory you've used, and you should do so unless you have another fairly immediate use for that data in your program. Textures, even if they're made from compressed images, use a lot of your application's memory heap because they have to be expanded in memory to be used. Every pixel takes up four bytes, so forgetting to release your texture image data can eat up your memory quickly.

The PVRTC Approach

The graphic chip used in the iPhone (the PowerVR MBX) has hardware support for a compression technology called PVRTC, and Apple recommends using PVRTC textures when developing iPhone applications. They've even provided a nice Tech Note that describes how to create PVRTC textures from standard image files using a command-line program that gets installed with the developer tools.

You should be aware that you may experience some compression artifacts and a small loss of image quality when using PVRTC as compared to using standard JPEG or PNG images. Whether the tradeoff makes sense for your specific application is going to depend on a number of factors, but using PVRTC textures can save a considerable amount of memory.

Loading PVRTC data into the currently bound texture is actually even easier than loading regular image types, although you do need to manually specify the width and height of the image since there are no Objective-C classes that can decode PVRTC data and determine the width and height1.

Here's an example of loading a 512x512 PVRTC texture using the default texturetool settings:

    NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"pvrtc"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];

// This assumes that source PVRTC image is 4 bits per pixel and RGB not RGBA
// If you use the default settings in texturetool, e.g.:
// texturetool -e PVRTC -o texture.pvrtc texture.png
// then this code should work fine for you.
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG, 512, 512, 0, [texData length], [texData bytes]);

That's it. Load the data from the file and feed it into OpenGL ES using glCompressedTexImage2D(). There is absolutely no difference in how you use the textures based on whether they were created with PVRTC compressed images or regular images.

Texture Limitations

Images used for textures must be sized so that their width and height are powers of 2, so both the width and the height should be 2, 4, 8, 16, 32, 64, 128, 256, 512, or 1024. An image could be, for example, 64x128 or 512x512.

When using PVRTC compressed images, there is an additional limitation: the source image must be square, so your images should be 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, 128x128, 256x256, etc. If you have a texture that is inherently not square, just add black padding to make the image square and then map the texture so that only the part you want to use shows on the polygon. Let's look at how we map the texture to the polygon now.

Texture Coordinates

When you draw with texture mapping enabled, you have to give OpenGL ES another piece of data, which is the texture coordinates for each vertex in your vertex array. Texture coordinates define what part of the image is used on the polygon being mapped. Now, the way this works is a little odd. You have a texture that's either square or rectangular. Imagine your texture sitting with its lower left corner on the origin of a two-dimensional plane and with a height and width of one unit. Something like this:


Let's think of this as our "texture coordinate system". Instead of using x and y to represent the two dimensions, we use s and t for our texture coordinate axes, but the theory is exactly the same.

In addition to the s and t axes, the same two axes on the polygon the texture is being mapped onto are sometimes referred to by the letters u and v. It is from this naming convention that the term "UV mapping", which you often see in 3D graphics programs, is derived.


Okay, now that we understand the texture coordinate system, let's talk about how we use those texture coordinates. When we specify our vertices in a vertex array, we need to provide the texture coordinates in another array, unsurprisingly called a texture coordinate array. For each vertex, we're going to pass in two GLfloats (s, t) that specify where on the coordinate system illustrated above that vertex falls. Let's look at the simplest possible example: a square made out of a triangle strip with the full image mapped to it. To do that, we'll create a vertex array with four vertices in it:


Now, let's lay our diagrams on top of each other, and the values to use in our coordinate array should become obvious:


Let's turn that into an array of GLfloats, shall we?

    static const GLfloat texCoords[] = {
        0.0, 1.0,
        1.0, 1.0,
        0.0, 0.0,
        1.0, 0.0
    };

In order to use a texture coordinate array, we have to (as you might have guessed) enable the feature. We do that using our old friend glEnableClientState(), like so:

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
To pass in the texture coordinate array, we call glTexCoordPointer():

    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

and then we have to... no, we're good. That's it. Let's put it all together into a single drawView: method. This assumes there's a single texture already bound and loaded.

- (void)drawView:(GLView*)view;
{
    static GLfloat rot = 0.0;
    
    glColor4f(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    
    static const Vertex3D vertices[] = {
        {-1.0,  1.0, -0.0},
        { 1.0,  1.0, -0.0},
        {-1.0, -1.0, -0.0},
        { 1.0, -1.0, -0.0}
    };
    static const Vector3D normals[] = {
        {0.0, 0.0, 1.0},
        {0.0, 0.0, 1.0},
        {0.0, 0.0, 1.0},
        {0.0, 0.0, 1.0}
    };
    static const GLfloat texCoords[] = {
        0.0, 1.0,
        1.0, 1.0,
        0.0, 0.0,
        1.0, 0.0
    };
    
    glTranslatef(0.0, 0.0, -3.0);
    glRotatef(rot, 1.0, 1.0, 1.0);
    
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
    static NSTimeInterval lastDrawTime;
    if (lastDrawTime) {
        NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
        rot += 60 * timeSinceLastDraw;
    }
    lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
}

Here's what the texture I'm using looks like:

Stock image, used with express permission of author.

When I run the code above using this texture, here is the result:


Wait a second Kemosabe: That's not right! If you look carefully at the texture image and the screenshot above, you'll notice that they're not quite the same. The screenshot has the y-axis (or the t-axis, if you will) inverted. It's upside down, but not rotated, just flipped.

T-Axis Inversion Conundrum

We haven't really done anything wrong in an OpenGL sense, but the result is definitely wrong. The reason is an iPhone-specific quirk. The graphics coordinate system used in Core Graphics, and everywhere on the iPhone that's not OpenGL ES, has a y-axis that increases as you go toward the bottom of the screen. In OpenGL ES, of course, the y-axis runs the opposite way, with y increasing toward the top of the screen. The result is that the image data we fed into OpenGL ES earlier is, as far as OpenGL ES is concerned, flipped upside down. So, when we map the image using the standard OpenGL ST mapping coordinates, we get a flipped image.

Fixing for Regular Images
When you're working with non-PVRTC images, you can flip the coordinates of the image before you feed it to OpenGL ES by inserting the following two lines into the texture-loading code, right after creating the context:

        CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);

That will flip the coordinate system of the context before we draw into it, resulting in data in the format that OpenGL ES wants. Once we do that, all is right with the world:


Fixing for PVRTC Images
Since there are no UIKit classes that can load or manipulate PVRTC images, we can't easily flip the coordinate system of compressed textures. There are a couple of ways that we could deal with this.

One would be to simply flip the image vertically in a program like Acorn or Photoshop before turning it into a compressed texture. Although this feels like a hack, in many situations it will be the best solution because it does all the processing work beforehand, so it requires no additional processing overhead at runtime and it allows you to have the same texture coordinate arrays for compressed and non-compressed images.

Alternatively, you could subtract each t-axis value from 1.0. Although subtraction is fast, those fractions of a second can add up, so in most instances you should avoid doing the conversion every time you draw. Either flip the image, or invert your texture coordinates at load time, before you start displaying anything.

More Mapping

Notice in our last example that the entire image is shown on the square that's been drawn. That's because the texture coordinates we created told it to use the entire image. We could change the coordinate array to use only the middle portion of the source image. Let's use another diagram to see how we would use just the middle portion of the image:


So, that would give us a coordinate array that looked like this:

    static const GLfloat texCoords[] = {
        0.25, 0.75,
        0.75, 0.75,
        0.25, 0.25,
        0.75, 0.25
    };

And if we run the same program with this new mapping in place, we get a square that displays only the center part of the image:


Similarly, if we wanted to show only the lower left quadrant of the texture on the polygon:


Which you can probably guess, translates to this:

    static const GLfloat texCoords[] = {
        0.0, 0.5,
        0.5, 0.5,
        0.0, 0.0,
        0.5, 0.0
    };

And looks like this:

Wait, There's More

Actually, there's not really more, but the power of this may not be obvious from mapping a square to a square. The same process actually works with any triangle in your geometry, and you can even distort the texture by mapping it oddly. For example, we can define an equilateral triangle:


But actually map the bottom vertex to the lower-left corner of the texture:


Mapping this way doesn't change the geometry - it will still draw as an equilateral triangle, not a right triangle, but OpenGL ES will distort the texture so that the portion of the triangle shown in the bottom diagram is displayed on the equilateral triangle. Here's what it would look like in code:

- (void)drawView:(GLView*)view;
{
    glColor4f(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    
    static const Vertex3D vertices[] = {
        {-1.0,  1.0, -0.0},
        { 1.0,  1.0, -0.0},
        { 0.0, -1.0, -0.0}
    };
    static const Vector3D normals[] = {
        {0.0, 0.0, 1.0},
        {0.0, 0.0, 1.0},
        {0.0, 0.0, 1.0}
    };
    static const GLfloat texCoords[] = {
        0.0, 1.0,
        1.0, 0.0,
        0.0, 0.0
    };
    
    glTranslatef(0.0, 0.0, -3.0);
    
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

And when run, it looks like this:


Notice how the curlicues that were in the lower left of our square are now at the bottom of our triangle. Basically, any point on the texture can be mapped to any point on the polygon. Or, in other words, you can use any (s,t) for any vertex (u,v) and OpenGL ES will do what it needs to do to map it.

Tiling & Clamping

Our texture coordinate system goes from 0.0 to 1.0 on both axes. So, what happens if you specify something outside those values? Well, there are two options, depending on how you set up your view.

Tiling aka Repeating

One option is to tile the texture, a behavior called "repeating" in OpenGL parlance. If we take our first texture coordinate array and change all the 1.0s to 2.0s:

    static const GLfloat texCoords[] = {
        0.0, 2.0,
        2.0, 2.0,
        0.0, 0.0,
        2.0, 0.0
    };

Then we'd get something like this:


If this is the behavior you want, you can turn it on by using glTexParameteri() in your setupView: method, like so:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Clamping

The other option is to have OpenGL ES simply clamp any value greater than 1.0 to 1.0 and any value less than 0.0 to 0.0. This essentially causes the edge pixels to repeat, which tends to give an odd appearance. Here is the same picture using clamp instead of repeat.


If you want this option, you would include the following two lines of code in your setup instead of the previous two:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Notice that you have to specify the behavior for the s and t axes separately, so it's possible to clamp in one direction and tile in the other.

This is the End

Well, that should give you a basic handle on the mechanism used to map textures to polygons in OpenGL ES. Even though it's simple, it's kind of hard to wrap your head around exactly how it works until you've played with it a bit, so feel free to download the accompanying projects and play around with the values.

Next time, I do believe we'll be entering the Matrix, so make sure you come on back.

  1. Actually, there is pre-release sample code that shows how to read the PVRT header from the file to determine the width and height of the image, along with other details of the compressed image file. I didn't use it because a) since it's unreleased, I assumed it would be a violation of the NDA to use that code in a blog posting, and b) the code is not capable of reading all PVRTC files, including the one in the sample project for this article.

Thanks to George Sealy and Daniel Pasco for helping with the t-axis inversion issue. Thanks also to "Colombo" from the Apple DevForums.


Byte Junky said...

Nice tutorial Jeff. You're right that it can take a little time to get your head around texture mapping, but a little playing with the values is a good way to learn.

Even though I feel I've got texture mapping down, I always like good tutorials to make sure I'm not missing anything.



Byte Junky said...

Hope you don't mind, but I'm going to put a link to this great tutorial on my blog at



Jeff LaMarche said...

Byte Junky:

Thanks. You never need to ask permission to link to me. I'm pretty easy going - the only thing that irks me is when somebody re-runs my material without identifying me as the author. Linking is always fine and welcome.

Nice blog, by the way - I really like the design.

Jan Geffert said...

Great Post!

I think on the iPhone the min. resolution is 64x64, so smaller resolutions will not work.

Looking forward to mipmaps!


Vargo said...

You have made my Memorial Day Weekend by posting this article. :) I've been waiting eagerly, and this is exactly what I needed.

Byte Junky said...

Thanks Jeff. It really just started out as a project to give something back for all the times I've read other people's tutorials. It's really taken off and I'm having a lot of fun learning and writing the tutorials.

Your tutorials have certainly played a role in getting me up to speed so thanks for that and I will continue to be a regular visitor.


slimemoldjuice said...

Just a quick theoretical note:

Why is it important to use triangles instead of other geometric shapes?

Triangle geometry speeds up graphical pipeline calculations because triangles are assuredly coplanar - three points are indeed always coplanar. This saves a tremendous amount of compute time when determining shading properties for that surface.

So, you should *always* try to form your geometry out of triangles. There is *never* a case for not using triangle geometry... indeed, even for NURBS surfaces, the OpenGL pipeline at some point converts the surface into a triangle mesh.

So, save your GPU some cycles and learn to love triangles!

Pat said...

Tremendous tutorials Jeff. You are certainly inspiring me to take my iPhone app development to the next level.

Now, if I could only come up with the next killer idea!!



Jeff LaMarche said...


I'm not sure. I think you can use smaller sizes, but if you use PVRT, I think they just won't use any less space because they'll be padded to align with 32-bit boundaries. I think non-compressed textures can be any powers of 2 (up to 1024 according to documentation, but you actually can use larger, it's just kinda silly to do so). It's possible what you say is true - I'll try and find out.


In regular OpenGL, triangles are faster to render, and there's no polygon that can't be broken down into triangles. For OpenGL ES, there's simply no choice. Quads are not supported, nor are larger polygons. You can use points, lines, and line loops, but everything else creates triangles - triangles, triangle fans, triangle strips. It's a hard-coded limitation of ES.


Thank you, Happy Memorial Day :)


Good luck - coming up with the ideas for a great program is never easy

Mark Johnson said...

The company behind the graphics chip in the iPhone has a lot of free code you can get, including various utilities for PVRT file handling and creating optimized data to feed into OpenGL.

Most of this code is ported to work right away on iPhone in the oolong graphics engine:
(To use, turn OFF thumb compile and change the developer profiles in the XCode projects.)

Thanks for the great tutorials Jeff.

Jeff LaMarche said...

Thanks, Mark! I keep forgetting about the PowerVR code, and don't think I've said anything about Thumb in this series... maybe I'll do a dedicated posting on optimizing at some point. But yeah, everyone, turn off "Compile for Thumb" when you're doing OpenGL ES work. Thumb mode is no good when you're doing tons of floating point calculations, hurts your performance something fierce.

Stan said...

Any chance you can post an example of texture/UV scrolling/translation? I'm working on an app with my girlfriend and we're having trouble simulating a moving ground plane with moving textures, and I'm starting to doubt it's even possible since it's so hard to find any examples of this anywhere.

devangpandya said...

Hi Jeff. Many thanks for the tutorial. Quite easy to follow. One thing I didn't get is how I can specify the position of the texture on the screen. I mean, if I have to move the image/texture, where should I specify it?
Thanks again.

FrankMarco said...

Hi Jeff!
I was wondering if it is possible to map multiple images to a polygon?
For instance, if you wanted to create a cube, and map a separate image on each face.
If so, I don't see how this would be done. Specifically in the setupView the call to glTexImage2D does not seem to have a way to specify which texture the image will be linked to.
Am I missing something?

Thank You,

Zip Games said...

I think there might by some problems with the color blending and transparency in this example. Jeff, please don't take offense, I'm just mentioning these problems. I really liked this tutorial, (being an openGL noob myself), but when using transparent images, it didn't really work. I did some research, and found this:
People should put that in place of the:
glBlendFunc(GL_ONE, GL_SRC_COLOR);
in the tutorial. Also, when you set glColor4f(), what point is that serving? If you turn off lights, the image isn't displayed at all if you use glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. If you use glBlendFunc(GL_ONE, GL_SRC_COLOR); then it is displayed as black. Here's what I think is happening. I think that when you use lights, the lights slightly mix the color of the pitch black with the light. I don't think that's a good idea. Shouldn't you just leave the color as white with full transparency?

Aks said...

Hi Jeff,
Your posts are too good...

I have a request for a tutorial.
I want to create a 3D arc.How can i achieve this?
Also, want to make them like the 3D walls of game "Labyrinth"

It would be highly appreciable if you guide for this.


boilerbill said...

Hi Jeff,

Thank you for your postings! Your writing style is amazing. Being a complete newbie to 3D programming, I am stuck trying to get this code to work on OS 3.0. I see the texture for a brief second as the app closes; otherwise it is a white screen. Any helpful ideas on where to look to troubleshoot?

Rafael said...


You just need to remove this line in the file Wavefront_OBJ_LoaderAppDelegate.m:
window = [[UIWindow alloc] initWithFrame:rect];

Actually, the window is already allocated by the NIB file, so calling this again will create a new window.

Sri said...

Hi Jeff,
Terrific blog. Love the stuff on OpenGL. Very, very nice!

Once a texture is created, bound and rendered, how can I now access individual pixels and change Alpha values on them? Is there a way to do this?

I have researched the web like crazy and have not found anything that's clear on how to do this.

Any help or information is hugely appreciated.


Steilpass said...

As said before: Great tutorial!

Now I am trying to adapt the example and use my own textures, and some PNGs just won't work.

Are there special requirements for the PNGs? Is there a way around this?

Matthias said...

So I figured out the problem:
If you modify texture.png's size so that
a) it's not a square, or
b) it's a square whose width and height are not a power of 2 (like 200*200),
the example won't work. I am still trying to figure out the solution.

And the example doesn't work in the simulator under 3.0

Matthias said...

As noted here

"Non-power-of-two textures are currently not supported in OpenGL ES for iPhone OS."

Perfunction said...

Neither your project nor my own code based on your tutorial will work in 3.1, in the simulator or on my actual device. However, it works fine in 2.2.1 in either environment.

The problem is that:
NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"png"];

is unable to find the correct path and returns nil no matter what I try. I've been trying to find a solution, but Google isn't being helpful so far.

Perfunction said...

Ok, I was able to get something working under 3.* that actually works out better for me anyways. I still don't understand why loading a single file didn't work.

Here is my batch texture loader:

Vlad Panait said...

Very cool post!
Thank you.

paul said...

Thanks for the post.
Can I use PVR RGBA 4444 textures with the following example code?
Do I use glTexImage2D or glCompressedTexImage2D when using this format?

thanks in advance


Matthew said...

Great articles. They are really helping me learn on the iPhone platform. Can I just ask: is the reason this source code doesn't run correctly when compiled with OS 3.0 that 3.0 requires OpenGL ES 2 rather than 1.1?

I was intending to write a game in 1.1 for the iPhone 3G but did want to compile using the 3.0 SDK so that I had access to the new frameworks (other than OpenGL ES).

Techy said...

Hello Jeff,
I am an iPhone developer and we have been making games with Objective-C, but now we are moving into OpenGL ES for various reasons. Anyway, do you think you could make a tutorial on how to make a game like Pong with OpenGL ES?

Techy said...

Also can you make the game tutorial 2D?

FallOutBoy said...

Great tutorial Jeff!! But it didn't work out-of-the-box (tm) for me on iPhone OS 3.0. I needed to comment the following line out (remembered that I read that in some other blog post of yours):

window = [[UIWindow alloc] initWithFrame:rect];

Thanks a lot!!
Christian Beer

Miari Nicolas said...

Hi Jeff, GREAT tutorial. I started iPhone with your book.

I first tried to insert the corresponding lines into your project template, but only a black screen would show. I downloaded the project, commented out the line FallOutBoy mentions, and it works perfectly... Guess I have to dig more into the code.

Developer Junior said...

Very Nice Tutorial , Thanks!

You should update this post with the correction.

*SDK 3.0 issue :

If you're having problems getting any of the older OpenGL ES sample code running under SDK 3.0, specifically if you get a white screen rather than what you're supposed to see, here is the fix: delete the line of code marked below from the App Delegate's applicationDidFinishLaunching: method:

- (void)applicationDidFinishLaunching:(UIApplication*)application
{
    CGRect rect = [[UIScreen mainScreen] bounds];

    window = [[UIWindow alloc] initWithFrame:rect]; // <-- Delete this

    GLViewController *theController = [[GLViewController alloc] init];
    self.controller = theController;
    [theController release];

    GLView *glView = [[GLView alloc] initWithFrame:rect];
    [window addSubview:glView];

    glView.controller = controller;
    glView.animationInterval = 1.0 / kRenderingFrequency;
    [glView startAnimation];
    [glView release];

    [window makeKeyAndVisible];
}


Dave Viner said...

Great tutorial Jeff. I've been following the series (on my own time) and it seems awesome. One question on this tutorial tho.

I follow all the code up to the full definition of drawView. In that function, you use glVertexPointer and glNormalPointer without any real explanation.

I think the glVertexPointer call (and associated GL_VERTEX_ARRAY and vertices) is required because, for OpenGL, you need to specify the shape onto which the texture (image) will be mapped.

I can't figure out what glNormalPointer does here. I don't see it in any previous tutorial, but maybe I missed it.

Also, could you comment on the glTranslatef and glRotatef calls? Why are these needed here?

Dave Viner said...

Also, I think it'd be great if you could post the source code for this tutorial. There are a few assumptions about what is set up in code that isn't displayed.

mario said...

Thanks Jeff for this useful tutorial.

I wonder if you could also map a 2D texture onto a 3D object (e.g. pyramid, cube, cylinder)?

Fig said...

Hello and thank you very much for this tutorial.
I have a little problem I can't figure why it's happening though. I have a textured tank model in blender. I used your export script, thank you again :). The problem is that some parts of the tank, like the turret, barrel are semitransparent for some reason. Any idea why this is happening?

It's not seen clearly here unless you really look at the image. When I spin the camera around it shows better.

Fig said...

I figured it out in the end. It's the glEnable(GL_BLEND).

Leonid said...

Great tutorial format Jeff!
It feels that it comes from heart :)

Guys, I disabled the light in Jeff's code and it results in a black screen. I would expect it to behave just as it did with colors in part 2, before we enabled the light.

I'm surfing the net and I don't have a clue why this happens, and I don't even understand the interaction between the light and the texture: if I enable lighting, enable light0, and set its position, I see the texture quite brightly even though I don't set any of the light components (I thought the default values were zeros!).

Does anyone know what is going on?


Leonid said...

Hi again!

I succeeded in displaying the texture with the light disabled by doing:
and then providing the color array.
The texture appears mixed with the color, which is exactly what shouldn't happen according to this tutorial, since we used:
glBlendFunc(GL_ONE, GL_SRC_COLOR);
which is supposed to do completely opaque texturing.

If the color array consists of pure white for each vertex, I get the texture bright and pure, as it should be.

I don't understand why I can't do a simple texture with lights disabled and without specifying useless colors for the polygons.

simon said...


I think you need to unbind the texture at the end for other non-texture-related drawing to work, i.e. glBindTexture(GL_TEXTURE_2D, 0);

GunWoo said...

Ah.. so great article.

i'm so happy cause i find this article.

thank you so much.

nosuic said...

Thank you very much, great explanation! I also love your book Beginning iPhone dev!

The Xcode project, though, is not running for me (I tried with Simulator 3.0 and greater). If anyone has any clue, please share. Thank you!

nosuic said...


My problem was similar to yours. To avoid lighting, color arrays, etc., I added glDisable(GL_LIGHTING); in the setup function, so my texture is rendered normally.

nacho4d said...

This tutorial is great, Jeff. I have a question: why does this not work with OpenGL ES 2?
I tried, and I get an unimplemented error when calling glMatrixMode(GL_VIEWMODE) inside setupView.

I suppose there are other things needed, but what are they?

Joseph said...

There's never been a matrix mode called GL_VIEWMODE.

Here's a reference:

Kevin said...

Thanks for the fantastic lessons. I have been playing with modifying your texture example to load textures onto the faces of a cube. I have found that while 4 s/t points work for the front face of the cube, you have to use 8 points (16 texCoords) to get the textures to show on any of the other faces (see example code). Any idea why that is?

// define texture coordinate order for sticking images to the faces of the cube
static const GLfloat cubeTexCoords[6][16] = {
{0,1, 0,0, 1,1, 1,0, 0,1, 0,0, 1,1, 1,0}, // works for 1,2,5,6 sides
{0,1, 0,0, 1,1, 1,0, 0,1, 0,0, 1,1, 1,0}, // works for 1,2,5,6 sides
{0,0, 0,1, 1,0, 1,1, 1,1, 0,0, 0,1, 1,0}, // works for 3 top
{1,0, 1,1, 0,0, 0,1, 1,1, 0,0, 0,1, 1,0}, // works for 4 bottom
{0,1, 0,0, 1,1, 1,0, 0,1, 0,0, 1,1, 1,0}, // works for 1,2,5,6 sides
{0,1, 0,0, 1,1, 1,0, 0,1, 0,0, 1,1, 1,0}  // works for 1,2,5,6 sides
};

// Draw the six faces of the cube and add a texture image to each face.
int i;
for (i = 0; i < 6; i++) {
glTexCoordPointer(2, GL_FLOAT, 0, cubeTexCoords[i]);
glBindTexture(GL_TEXTURE_2D, textureID[i]);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, cubeIndices[i]);
glBindTexture(GL_TEXTURE_2D, 0); // unbinds texture
}

archana said...

I tried running the example on simulator. It works fine on simulator but shows a blank screen on the iphone. What could be the problem?

Tom said...

holy crap I just spent the whole day trying to figure this out and got it all in like 5 minutes. Thank you!

Honey said...

Please, please help us with textures for a sphere. How do we calculate texture coordinates for it, etc.?

Killian said...

Hi Jeff,

I'm a confirmed OpenGL programmer and I just started to develop for iPhone / iPad platforms using OpenGL ES and Objective-C.

Your tutorials helped me a lot, especially this one about texture mapping. Like I said, I'm pretty used to OpenGL so that was more the image-loading code I was interested in, and it works well!

Concerning the OpenGL stuff however, I was puzzled by some of the texture-loading code, where you enable GL_TEXTURE_2D, GL_BLEND and set the blending function.

I thought it was maybe an OpenGL ES specificity, but after some testing I can confirm that these lines are NOT necessary: enabling GL_TEXTURE_2D is only necessary when drawing, not when loading a texture. And blending doesn't really have to do with texture mapping; you can leave it off the whole time and your textured quad will show up properly anyway.

Maybe I missed something here, but it works for me as I've just described, and I thought it could be less confusing for OpenGL newcomers not to mix texturing and blending.

Thanks again for your tutorials!

Best regards,


P.S.: as for non-power-of-two textures, I've implemented code that resizes them on the fly if the provided images are NPOT. It works well, so if someone's interested I can post it.

Travis said...

I would like to see that conversion code Killian.

Killian said...

Hi Travis,

given that you already have functions to determine whether a given number is Pow2, and if not, to find the nearest Pow2 (I can also give them to you, but they are pretty easy to find on the net), you will have calculated the new desired "Pow2" dimensions.

Then, you can do the following, where "image" is the original, non-Pow2 image:

CGSize newSize = CGSizeMake(pow2Width, pow2Height);
UIGraphicsBeginImageContext(newSize); // a bitmap context is needed before drawing
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];

UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

[image release];
[newImage retain];
image = newImage;

Mark said...

To use lighting and textures, do I still need to calculate the normals for vertices?

marc-w-abel said...

I credited your blog in some source code dealing with the texture y-coordinate issue. Many have written on this subject, but few have written clearly.

Patrick said...

Great tutorial! This is really helping me get a good start in OpenGL ES. I have a question though, I'm writing a 2D iPhone game in OpenGL, and I'm wanting to use 1 square for the floor. I have the square in place, but how can I use multiple textures to cover the area? I have a texture for dirt, grass, and water. How can I draw 30 of these squares onto the floor plane while keeping each texture in a separate file? Is that possible, or do I have to make each floor map its own image, then swap out the texture as my player moves from screen to screen?

Mohit Garg said...

Nice tutorial. Helped me a lot.
Thank you :)

Ali said...

I need to display an image using a texture, but after that I will need to zoom and pan it, and also add a toolbar to the GLViewController.

How can I do that, or is it even possible?

dreau said...

This is so cool! Good job Jeff! This is really helpful...btw, is there something similar for filling up circles as well? thanks!

danl said...

I'm new to OpenGL and I'm finding these articles very useful.

A question: I'm trying to read a PNG the size of the screen and then let the user edit the image. Are textures the right way to do this? They seem to be intended for drawing strokes or shapes. I've been unable to get the code working yet (I just see a grey box), so I was wondering if there is another recommended way of manipulating the whole screen, especially given the size limits of textures.

CodeSlayer said...

This is great information, thanks !

I am having a bit of an issue. I would like to set a 2D PNG image as a background image. I set my vertices as follows:

// Vertices
static const Vertex3D vertices[] = {
{-1.0, 1.0, -0.0},
{ 1.0, 1.0, -0.0},
{-1.0, -1.0, -0.0},
{ 1.0, -1.0, -0.0}
};

// Normals
static const Vector3D normals[] = {
{0.0f, 0.0f, 1.0f},
{0.0f, 0.0f, 1.0f},
{0.0f, 0.0f, 1.0f},
{0.0f, 0.0f, 1.0f}
};

// Tex Coords
static const GLfloat texCoords[] = {
0.1875f, 0.03125f, // image is 320x480, but texture is 512x512. Image is offset by 96x32
0.1875f, 0.96875f,
0.8125f, 0.03125f,
0.8125f, 0.96875f
};

and then scale the image with:

glScalef(320.0f / 512.0f, 480.0f / 512.0f, 1.0f);

... and all I get is a grey rectangle, smaller than the full screen size. Any ideas what I am doing wrong ?