OpenGL ES for iOS

I’m working on an application that needs to use OpenGL ES v2.0 on both iOS and Android, and one problem I’m running into is finding a good in-depth discussion of OpenGL ES shaders on iOS: one book I have covers shaders well, but its sample code was written for Microsoft Windows. Another book was rated highly on Amazon, but the discussion seems to be geared to people who have never worked with OpenGL or with iOS before.

The first hurdle is simply getting a basic OpenGL ES app up and running. I finally have something, so I’m posting it here for future reference, and in case it proves useful to other people.

This basically marries the OpenGL ES sample code from both books, incorporating the shaders from one with the iOS base of the other.

This relies on GLKit; at this point, with iOS 6 on most devices and iOS 5 on most of the rest, there is no reason not to use GLKit. I’m only using GLKView, however; the types of applications I’m working on do not require constant rendering (like an OpenGL game), so I’m not using GLKViewController, which provides a timer loop that constantly renders frames for continuous smooth animation. (To plug in GLKViewController you just change GSViewController’s parent class to GLKViewController and remove the delegate assignment to self.view in viewDidLoad; a sketch of that change follows.)
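Here is a minimal sketch of what that GLKViewController-based variant might look like; this is my guess at the minimal change, with the instance variables the same ones declared in GSViewController.h below and only the superclass changed:

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

// Sketch only: per the note above, you'd drop the view.delegate = self line
// from -viewDidLoad, since GLKViewController drives -glkView:drawInRect:
// from its own animation timer.
@interface GSViewController : GLKViewController
{
	EAGLContext *context;
	GLuint vertexBufferID;

	GLuint programObject;
}

@end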

Also note I’m releasing resources on viewDidDisappear rather than on viewDidUnload; iOS 6 deprecates viewDidUnload.

GSViewController nib

This is actually very simple: the GSViewController nib contains a single view, a GLKView. I’m not posting it here because there’s nothing to it.

Note that if you have other views and you want to move the GLKView to a different location in the hierarchy, you can modify the GSViewController .h/.m files to provide an outlet to the view, as sketched below.
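A minimal sketch of that change, assuming a hypothetical outlet named glView; you would add this to the GSViewController.h interface shown below, wire it to the GLKView in the nib, and then use self.glView wherever the code casts self.view to GLKView:

// Sketch only: the glView outlet name is just for illustration.
@property (nonatomic, retain) IBOutlet GLKView *glView;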

GSViewController.h

//
//  GSViewController.h
//  TestOpenGL
//
//  Created by William Woody on 6/12/13.
//  Copyright (c) 2013 Glenview Software. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface GSViewController : UIViewController <GLKViewDelegate>
{
	EAGLContext *context;
	GLuint vertexBufferID;
	
	GLuint programObject;
}

@end

This implements the basic example out of the book OpenGL ES 2.0 Programming Guide. Note, however, that instead of creating a ‘UserData’ object and storing it in an ‘ESContext’ (which doesn’t exist on iOS, AFAIK), I keep the contents of the ‘UserData’ record (the programObject field) as instance variables, along with a reference to the EAGLContext (iOS’s counterpart to the ‘ESContext’) and a reference to the vertex buffer I’m using.

GSViewController.m

//
//  GSViewController.m
//  TestOpenGL
//
//  Created by William Woody on 6/12/13.
//  Copyright (c) 2013 Glenview Software. All rights reserved.
//

#import "GSViewController.h"

typedef struct {
	GLKVector3 positionCoords;
} SceneVertex;

static const SceneVertex vertices[] = {
	{ {  0.0f,  0.5f, 0.0f } },
	{ { -0.5f, -0.5f, 0.0f } },
	{ {  0.5f, -0.5f, 0.0f } }
};

@implementation GSViewController

GLuint LoadShader(GLenum type, const char *shaderSrc)
{
	GLuint shader;
	GLint compiled;
	
	shader = glCreateShader(type);
	if (shader == 0) return 0;
	
	glShaderSource(shader, 1, &shaderSrc, NULL);
	glCompileShader(shader);
	
	glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
	if (!compiled) {
		GLint infoLen = 0;
		glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
		if (infoLen > 1) {
			char *infoLog = malloc(sizeof(char) * infoLen);
			glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
			NSLog(@"Error compiling shader: %s",infoLog);
			free(infoLog);
		}
		glDeleteShader(shader);
		return 0;
	}
	return shader;
}

- (BOOL)internalInit
{
	const char vShaderStr[] =
		"attribute vec4 vPosition;                \n"
		"void main()                              \n"
		"{                                        \n"
		"    gl_Position = vPosition;             \n"
		"}                                        \n";
	const char fShaderStr[] =
		"precision mediump float;                 \n"
		"void main()                              \n"
		"{                                        \n"
		"    gl_FragColor = vec4(1.0,0.0,0.0,1.0);\n"
		"}                                        \n";
		
	GLuint vertexShader;
	GLuint fragmentShader;
	GLint linked;
	
	vertexShader = LoadShader(GL_VERTEX_SHADER,vShaderStr);
	fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);
	
	programObject = glCreateProgram();
	if (programObject == 0) return NO;
	
	glAttachShader(programObject, vertexShader);
	glAttachShader(programObject, fragmentShader);
	glBindAttribLocation(programObject, 0, "vPosition");
	glLinkProgram(programObject);
	
	glGetProgramiv(programObject, GL_LINK_STATUS, &linked);
	if (!linked) {
		GLint infoLen = 0;
		glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
		if (infoLen > 1) {
			char *infoLog = malloc(sizeof(char) * infoLen);
			glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
			NSLog(@"Error linking shader: %s",infoLog);
			free(infoLog);
		}
		glDeleteProgram(programObject);
		programObject = 0;
		return NO;
	}
	return YES;
}

- (void)viewDidLoad
{
	[super viewDidLoad];
	
	GLKView *view = (GLKView *)self.view;
	NSAssert([view isKindOfClass:[GLKView class]],@"View controller's view is not a GLKView");
	context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
	view.context = context;
	view.delegate = self;
	[EAGLContext setCurrentContext:context];
	
	glClearColor(0.0f,0.0f,0.0f,1.0f);
	
	[self internalInit];
	
	// Generate, bind and initialize contents of a buffer to be stored in GPU memory
	glGenBuffers(1, &vertexBufferID);
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
	glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}

- (void)viewDidDisappear:(BOOL)animated
{
	[super viewDidDisappear:animated];
	
	GLKView *view = (GLKView *)self.view;
	[EAGLContext setCurrentContext:view.context];
	
	if (0 != vertexBufferID) {
		glDeleteBuffers(1, &vertexBufferID);
		vertexBufferID = 0;
	}
	
	view.context = nil;
	[EAGLContext setCurrentContext:nil];
	
	glDeleteProgram(programObject);
	programObject = 0;
	[context release];
	context = nil;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
	glClear(GL_COLOR_BUFFER_BIT);
	
	glUseProgram(programObject);
	
	glEnableVertexAttribArray(GLKVertexAttribPosition);
	glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(SceneVertex), NULL + offsetof(SceneVertex, positionCoords));
	glDrawArrays(GL_TRIANGLES, 0, 3);
}

@end

Note a few things. First, I’m setting up a GLKView for rendering; this is all handled in -viewDidLoad. I’m also setting up a vertex buffer in viewDidLoad; the OpenGL ES 2.0 Programming Guide example puts that initialization in Init() instead. The -viewDidLoad method also replaces some of the setup in the example’s main() method.

Also note that -(BOOL)internalInit replaces most of the rest of Init()’s functionality. Specifically we handle compiling the shaders and creating a program there.

I handle cleanup in -viewDidDisappear; keep in mind the example OpenGL ES application doesn’t do any cleanup. We do it here because our application may continue to run even after our view controller disappears, so we need to be a good citizen.

And our drawing delegate method (glkView:drawInRect:) doesn’t need to set up the viewport, nor does it need to swap buffers; GLKView handles both for us.


Yes, there are a lot of problems with this code. It’s a quick and dirty application that I’m using to understand shaders in OpenGL ES 2.0.

But I do get a triangle.

Building a static iOS Library

I’m using the instructions that I found here: iOS-Framework

But here are the places where things deviated thanks to Xcode 4.6:

(1) At Step 2: Create the Primary Framework Header, for some reason (I suspect because things changed in Xcode), specifying target membership for a header file no longer appears to work. From the notes, folks seem to suggest using the Build Phases “Copy Files” section to specify where and how to copy the files.

So what I’m doing is, for every publicly available header file, (a) making sure it’s inserted into the “Copy Files” list, and (b) making sure the destination is given as “Products Directory” with the subpath include/${PRODUCT_NAME}.

(2) At Step 3: Update the Public Headers Location, note the script in step 5 uses “${PUBLIC_HEADERS_FOLDER_PATH}” to specify where files are to be copied from. So in Step 3, we need to make sure the public headers folder path is set to something more reasonable.

In this step, set the public headers folder path (and, for good measure, the private headers folder path as well) to “include/${PRODUCT_NAME}”.

These changes get me to the point where I can build the framework after step 5.


There was one other hitch: you cannot include the framework project (in the later steps) into the dependent project while the framework project is still open.

Dragging and dropping objects in GWT

If you want to add a click-and-drag handler in GWT, so that (for example) if you click on an image object you can move it around (and drag content logically associated with it), it’s fairly straightforward.

First, you need to implement a MouseDownHandler, a MouseMoveHandler and a MouseUpHandler, and attach them to your image. (Me, I like putting all three into a single class which contains the state associated with the dragging operation.) Thus:

	Image myImage = ...
     
	EventHandler h = new EventHandler(myImage);
	myImage.addMouseDownHandler(h);
	myImage.addMouseMoveHandler(h);
	myImage.addMouseUpHandler(h);

Now the event handler needs to do more than just track where the mouse was clicked, where it is being dragged to, and how the universe should change as the dragging operation takes place. We also need to capture the mouse so we can handle dragging outside of our object, and we have to prevent the event from percolating upwards, so that we get the dragging events rather than the browser.

This means that our event dragging class looks something like:

private class EventHandler implements MouseDownHandler, MouseMoveHandler, MouseUpHandler
{
	private boolean fIsClicked;
	private Widget fMyDragObject;

	EventHandler(Widget w)
	{
		fMyDragObject = w;
	}
		
	@Override
	public void onMouseUp(MouseUpEvent event)
	{
		// Do other release operations or appropriate stuff

		// Release the capture on the focus, and clear the flag
		// indicating we're dragging
		fIsClicked = false;
		Event.releaseCapture(fMyDragObject.getElement());
	}

	@Override
	public void onMouseMove(MouseMoveEvent event)
	{
		// If mouse is not down, ignore
		if (!fIsClicked) return;

		// Do something useful here as we drag
	}

	@Override
	public void onMouseDown(MouseDownEvent event)
	{
		// Note mouse is down.
		fIsClicked = true;

		// Capture mouse and prevent event from going up
		event.preventDefault();
		Event.setCapture(fMyDragObject.getElement());

		// Initialize other state we need as we drag/drop
	}
}

A FlexTable that handles mouse events.

Reverse engineering the GWT event handler code to add new events is simple. Here’s a FlexTable which also handles mouse events:

	/**
	 * Internal flex table declaration that syncs mouse down/move/up events
	 */
	private static class MouseFlexTable extends FlexTable implements HasAllMouseHandlers
	{
		@Override
		public HandlerRegistration addMouseDownHandler(MouseDownHandler handler)
		{
			return addDomHandler(handler, MouseDownEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseUpHandler(MouseUpHandler handler)
		{
			return addDomHandler(handler, MouseUpEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOutHandler(MouseOutHandler handler)
		{
			return addDomHandler(handler, MouseOutEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOverHandler(MouseOverHandler handler)
		{
			return addDomHandler(handler, MouseOverEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseMoveHandler(MouseMoveHandler handler)
		{
			return addDomHandler(handler, MouseMoveEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseWheelHandler(MouseWheelHandler handler)
		{
			return addDomHandler(handler, MouseWheelEvent.getType());
		}
	}

Addendum:

If you want the location (the cell) in which the mouse event happened, you can extend MouseFlexTable with the additional methods:

		public static class HTMLCell
		{
			private final int row;
			private final int col;
			
			private HTMLCell(int r, int c)
			{
				row = r;
				col = c;
			}
			
			public int getRow()
			{
				return row;
			}
			
			public int getCol()
			{
				return col;
			}
		}
		
		public HTMLCell getHTMLCellForEvent(MouseEvent event) 
		{
			Element td = getEventTargetCell(Event.as(event.getNativeEvent()));
			if (td == null) {
				return null;
			}

			int row = TableRowElement.as(td.getParentElement()).getSectionRowIndex();
			int column = TableCellElement.as(td).getCellIndex();
			return new HTMLCell(row, column);
		}

GWT and tall images revisited.

As a follow-up to Things to Remember: Why cells with an inserted image are taller than the image in GWT, the answer is:

VerticalPanel panel;
...
Image image = new Image("images/mydot.png");
DOM.setStyleAttribute(image.getElement(), "display", "block");
panel.add(image);

For whatever reason, the image object has ‘inline’ formatting by default, and when GWT assembles the table cell, the cell’s height is being derived from the font height rather than from the image height. Setting the image to block seems to resolve this issue.

iPhone Multitouch

When handling touch events, the events you get via the touch methods in the UIView class are pretty much the raw events from the hardware, wrapped in Objective-C objects. If you’re dragging around one finger and then, while keeping that finger down, touch with a second finger, you get a new touchesBegan event. Lift the first finger but keep the second down, and you get a touchesEnded event for the first finger, while the move events for the second continue.

I suppose this is what one would expect. But it means that if you drag with two fingers, you probably are going to receive touchesMoved events for each finger separately, rather than receiving a touchesMoved event for both fingers at the same time.

This implies that if you want to track all the fingers moving around at the same time, you need to maintain the state of affairs; that is, you need to keep track of the touch events that haven’t moved as well as the ones that have moved.

Here is some code I wrote which keeps the current touch events in a secondary set, so I can see and track all the fingers as they move. This isn’t the best code in the world, but it does prove the concept.

@implementation TestTouch

- (id)initWithCoder:(NSCoder *)aDecoder
{
    self = [super initWithCoder:aDecoder];
    if (self) {
		self.multipleTouchEnabled = YES;
		set = [[NSMutableSet alloc] initWithCapacity:10];
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
	[[UIColor whiteColor] setFill];
	UIRectFill(rect);
	
	[[UIColor blackColor] setFill];
	for (UITouch *t in set) {
		CGRect r;
		CGPoint pt = [t locationInView:self];
		r.origin.x = pt.x - 22;
		r.origin.y = pt.y - 22;
		r.size.width = 44;
		r.size.height = 44;
		UIRectFill(r);
	}
}

- (void)dealloc
{
	[set release];
    [super dealloc];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
	for (UITouch *t in touches) {
		[set addObject:t];
	}
	[self setNeedsDisplay];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
	[set removeAllObjects];
	[self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
	for (UITouch *t in touches) {
		[set removeObject:t];
	}
	[self setNeedsDisplay];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
	[self setNeedsDisplay];
}

@end

If you need to capture when all the touches start and when they all end, you can do this by testing if the set (defined as NSMutableSet in the class) is empty.

For whatever reason I want to think that the touches set passed in contains all of the fingers, but no: you only get the touches that changed, not the fingers that stayed still. Thus, you need a set to capture them all.
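For example, here’s a minimal sketch of how that begin/end detection might look, using the same set instance variable as the code above (the comments mark where per-gesture setup and teardown would go):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
	// If the set is empty, this is the first finger down: the gesture is starting.
	if ([set count] == 0) {
		// Initialize whatever per-gesture state you need here.
	}
	for (UITouch *t in touches) {
		[set addObject:t];
	}
	[self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
	for (UITouch *t in touches) {
		[set removeObject:t];
	}
	// If the set is now empty, the last finger has lifted: the gesture is over.
	if ([set count] == 0) {
		// Tear down per-gesture state here.
	}
	[self setNeedsDisplay];
}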