There’s “no such method” to my madness, man…


So you’re writing a web application and it, like, needs to run on one of the many mobile devices invading society these days. You need to invoke a function or a method that doesn’t exist on the class you’re dealing with. You won’t know the name or arguments of the function until the Javascript begins executing on the mobile device. Hi, I’m Cliff. You’re here because you can’t predict your functions and methods in advance of coding your application. I’m here because I wanted to post a really quick tip for those of you dealing with crazy Javascript on a modern browser.

Ruby, Python, Groovy, and even Objective-C have a means of catching and interpreting functions or methods that have not yet been defined. It’s one of those new cool-kid programming tricks that these languages employ which allow developers to define programming languages that take the shape of the problem they are trying to solve. (See write-ups on BDD and DSLs to understand further.) Javascript, historically, has lacked such features. Notice I say historically… I’ve been studying just such a feature enabled by a magic method called __noSuchMethod__. I’ll be spending the next few days understanding its feasibility and limitations on a mobile web browser, along with the Java to Javascript bridge feature in Android. Check it out and you can thank me later…
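To make the idea concrete, here’s a minimal sketch of the trap. A hedge up front: __noSuchMethod__ was only ever implemented in SpiderMonkey/Gecko-based browsers, so on other engines the undefined call throws a TypeError instead of reaching the handler. The phone object and method names below are made up for illustration.

```javascript
// A hypothetical object that catches calls to methods it never defined.
// In Gecko-based browsers, phone.dial("555-0100") would route through
// __noSuchMethod__ automatically; other engines throw a TypeError, so
// here we invoke the handler directly to show what it receives.
var phone = {
  __noSuchMethod__: function (name, args) {
    return "no method '" + name + "' -- called with " + args.length + " argument(s)";
  }
};

console.log(phone.__noSuchMethod__("dial", ["555-0100"]));
```

The handler gets the missing method’s name plus its argument list, which is exactly the hook you need to shape an API around calls you can’t predict at coding time.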

Make an iPhone ringtone!


I’ve been away from Xcode and iOS for some time, doing my bid in Android land. (I’ve also seriously neglected my blog, apologies to both of my loyal readers, I know there’s only two of you!) The interesting thing is how much you realize you knew when you go back. Even more interesting is how much you forgot and how much changes in a brief matter of months. I overheard some of my former iOS coworkers talking about lsd errors and thought they were taking drugs. At any rate, today’s topic is a simple tip that I keep forgetting.

Ringtones for iPhones!
This is a two-step process involving conversion and consumption. Ringtones must be in the m4a file format but use a “.m4r” extension. The easiest way to make them is to use “afconvert” to convert any existing audio file to “m4a” format using this syntax:

afconvert -v -f m4af ./MyInputFile.wav ./MyOutputFile.m4r

The -v flag enables verbose output, which lets you see what the command is doing. The -f flag specifies the output format. Consuming these ringtones is as simple as dragging and dropping the “.m4r” file into iTunes under the ringtone section so that the next time you sync your device it will include the new ringtone. You may also need to select your device in iTunes, click on the “Tones” tab, and enable ringtone sync for this to work.
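If you have a whole folder of audio to convert, the same command drops neatly into a loop. Here’s a sketch; the afconvert call assumes macOS (it ships with the OS) and is skipped where the tool is absent, so everything else is plain shell:

```shell
# Convert every .m4a in the current folder into a .m4r ringtone.
for f in *.m4a; do
  [ -e "$f" ] || continue           # no matches: the glob stayed literal
  out="${f%.m4a}.m4r"               # same name, ringtone extension
  echo "$f -> $out"
  if command -v afconvert >/dev/null 2>&1; then
    afconvert -v -f m4af "$f" "$out"
  fi
done
```

The `${f%.m4a}.m4r` expansion is doing the rename trick from above: strip the old extension, tack on the one iTunes recognizes.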

OCMock Woes


So I’m building and running tests and banging my head against oncoming traffic while trying to get the OCMock framework to do something… I dunno… mocky? I’m running Xcode 4.2 build (4D199) on Lion. Dragging and dropping the freshly downloaded “.framework” bundle doesn’t want to link. Hi, I’m Cliff and you’re here because you have linker errors in your XC4 project while trying to get objects to pretend to be other objects. I’m here to try to help you work through some of the issues.

So I try building the static lib and forcing it. The first roadblock I hit was with the ${BUILD_STYLE} environment variable no longer being set; ${BUILD_STYLE} is changed to ${CONFIGURATION} in Xcode 4. The second bump I hit was with categories not loading from the static lib I generated. Setting “-ObjC” under “Other Linker Flags” solved this. Next I get OCMock to load but auto-complete doesn’t work with the category methods added to “id”. (e.g., stub, expect, verify, etc.) I had accidentally set my “Header Search Path” incorrectly. You have to point it to the folder that contains the OCMock folder with the header files in it; then you can successfully import in your test cases. Finally auto-complete works and things load, but setting simple expectations on a mocked string appears to be broken. I’m now wondering if I should download source from the git repo instead of building the source included in the dmg. It’s late and I’m incredibly frustrated with not being able to write a single test all day…
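For anyone fighting the same linker, here’s a sketch of the target build settings corresponding to the fixes above; the Vendor path is hypothetical, so substitute wherever your OCMock headers actually live:

```
// Unit test target build settings (Xcode 4); paths are examples only.
OTHER_LDFLAGS = -ObjC                    // makes categories in static libs load
HEADER_SEARCH_PATHS = $(SRCROOT)/Vendor  // folder *containing* the OCMock/ header folder
// Build scripts: $(BUILD_STYLE) from Xcode 3 becomes $(CONFIGURATION) here.
```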

Using Siri Effectively


I was watching this argument between a man and his wife happen with Siri trapped in the middle when it dawned on me. I still didn’t know how to get Siri to “tell my wife” about things happening throughout the day. Hi, I’m Cliff. You’re here because Siri doesn’t know about your spouse or significant other. Don’t worry, it’s not difficult to teach her about the individual members of your family. I had much more trouble trying to get her to understand how I pronounce my wife’s name since it is not spelled the way it sounds. I cover all of this in today’s article.

To start, I went into edit mode on my wife’s contact card and started fiddling. Eventually I noticed the “add field” button. Tapping this brought up a host of different things I could associate with a contact, most notably the “Related People” button. Tapping on “Related People” adds a custom relationship to a contact. I assigned a spouse relationship to her contact and linked it to my contact info. That got me thinking. How does Siri know who I am? After all, my contact info is just another card in my address book. There is no “me” card. I backed out of the address book and went into settings under general and found the Siri settings.

Under Settings -> General -> Siri I found the “My Info” button. Tapping this allowed me to link the contact card with my name to Siri which, in short, teaches her who I am. After that I went back into the contact card for my wife and found some more custom fields. The “Phonetic First Name” was particularly interesting since we (Siri and I) seemed to struggle a bit here. After I spelled things phonetically we started to get along much better. This should put an end to the disagreements we have on Hwy 101 when I say, “Tell Altaress I’m stuck in traffic and it’ll be a while before I can get you.” To which Siri responds,

“Clifton, I don’t understand who ‘Ouch Therese’ is. I need a name, number or contact to send your message…”

Re-sign your iPhone release builds


I have a quick post for tonight that I don’t want to lose. The problem I was trying to solve about a month ago involved testing my iOS release builds that had been signed for App Store distribution. Our QE guy, a bright fellow, taught me how to re-sign the package with my development certificate so that it could be run on the device.

codesign -f -s "iPhone Developer: Clifton Craig (DUWQRS4KP8BM9)" -vv MyCoolProduct.app

Skype Video from iPhone to Windows 7 PC


I just did my first Skype video from my iPhone to my wife’s Windows 7 PC! Ladies and gentlemen, we live in an exciting time. Not only can we do video over 3G outside of Apple’s “Wifi restricted” Facetime walled garden, but we can do video to other Skype clients running on desktops and laptops.

The other day I went out shopping, as I usually do. (Forgive the random insertion of story prose, I’m leading to an eventual point.) I usually have a list composed by my wife. Typically this list sits on the kitchen table as I shop, depending on my mental carbon copy of said list. My wife has her usual way of requiring specific items from the store, for instance a specific flavor of tea and a specific brand of makeup. For most items I can make do with my mental list copy but for “women” products like hair creams, oils, face powder, etc., I need specific item serial and/or model numbers to guarantee my return with the correct product. I know this. The wife knows this. (By the way, don’t act like it’s just me shopping for wife stuff! A whole lot of you are probably reading this on your phone right now in the 15-items-or-less aisle holding a box of pads!) My venture out into the grocery wild usually ends with my return to the store to purchase products similar to what was already purchased but present on the gnarled list which was un-crumpled after rebounding from my head. That’s what typically happens.

Like I was saying, the other day I was out shopping as usual. My list was in the usual place and I was performing grey matter scans for random products as I travelled through the store. This time I had a plan! I would contact my wife and attempt to use my iPhone to protect my skull from flying paper upon my return home. That is I would use technology on my phone to prevent the multiple trips to the store. My plan was such that I would photograph various attempts to purchase incorrect products and send them homeward for verification. I thought hard about my idea. Maybe I could market it. “Shopping over IP” I thought. SOIP? I would make millions so long as my prototype worked without flaws.

As fate would have it, the idea had its flaws. It had holes in places I hadn’t anticipated. The first problem was time. It takes too long to photograph each item out of a list of 20 or so and compose individual emails for each one. I could try SMS, but that results in compression of photos so they could be rendered on her then-smaller BlackBerry display. (We recently upgraded her to an iPhone.) Compression was unacceptable in the case of products that look similar but are, in women’s eyes, vastly different. Every detail is important and I needed to milk each pixel out of my phone’s superior camera! The other hole was that in order to see the photos my wife needed to be at her computer. Because she works literally two jobs I knew she would be in bed napping and in no mood to visit her laptop in the study to verify something she clearly identified on the list glued to the syrup stains left from the kids’ morning breakfast. The last major hole was that my wife is a cosmetologist. While I would do backflips for anything presented to me technologically, she has a certain temperament which results in little patience when things require a log in, or a mouse click, or any form of virtual-ness. I imagined our virtual conversation.

“Look honey! It’s like you’re virtually in the store next to me! Which color hair dye was that again???”

“What did I put on the damned list???!!!”

…It wasn’t pretty, even in the virtual sense. Alas, SOIP would probably have to wait until version 5.0 when all the kinks were worked out before it went public. I broke down that day and eventually used a traditional telephone call. She talked me through the physical aisle rather than holding my hand next to the virtual product. It sucked. If only I could Facetime with her. I thought about Apple’s recent Facetime beta that I had installed on my Mac. Maybe I could have used that instead? That would have had even more holes because I wasn’t sure if my crappy router was dropping the connection from the Mac as it is prone to do, and also I didn’t have wifi in this particular store. Also it would mean she would have to log into my Mac, which was in yet another room, requiring her to master yet another set of technical knowledge. “Where’s the start menu? What’s this weird looking blue face thing in the lower left corner? What kinda’ Facetime?? Which icon???”

Today, thanks to Skype, I have an answer! I was so excited to see the update that I have to admit something. I can’t wait to go shopping! Maybe it’s just me, but saying, “No worries, dear! I’ll pick up those hair products for you!” now gives me a thrill. I can’t wait to cut on my camera in Skype and literally point to the wrong product saying, “See! It’s shelved under the wrong tag! That’s why I keep picking it up!!!” (Why is it that women always assume us men are incompetent? It’s clearly the store clerks that label face cream as face powder!) With Skype I can see my SOIP prototype taking off and being adopted by the masses! Remember where you heard it first, and I will expect an interview from Matt Lauer on the Today Show sometime in the near future.

(The above prose is merely a fictitious example of random shopping experimentation. Any similarity between these stories and real life examples from the author is purely coincidental. No trees have been harmed in the making of this story though one husband may have received minor paper cuts. As always, the examples posted here are done so in humor. Don’t take anything too seriously! However, the author does retain any and all claims of originality of the SOIP concept!)

Speex On iPhone Explained Part II


*Update*
In part I, I neglected to point out that you should uncomment #define _USE_SSE in config.h as mentioned below. This preprocessor directive will allow you to run on device. It was also mentioned that you could get more speed out of Speex if you #define FIXED_POINT instead of FLOATING_POINT. I have not verified this and Speex runs acceptably in my implementation without it, but it’s worth mentioning.
*Update*
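As a sketch, the relevant corner of config.h would look something like this; remember the FIXED_POINT speed-up is hearsay I haven’t verified, so it stays commented out here:

```c
/* config.h (excerpt) -- flags discussed above. Define exactly one of
   FLOATING_POINT / FIXED_POINT. */
#define _USE_SSE          /* per part I, needed to run on device          */
#define FLOATING_POINT    /* what my implementation uses; runs acceptably */
/* #define FIXED_POINT */ /* reportedly faster, but unverified by me      */
```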

You have a lot of vocal audio data. Maybe it needs to be stored on an iPhone. Maybe it needs to glide effortlessly over the network like a slice of dental floss blowing in the wind. Whatever the case, you need a good compressor and Speex is a great solution. Hi, I’m Cliff. You’re here because you’re looking for an answer to your audio compression needs. I’m here to deliver the secrets to decompressing audio with the Speex codec. That, for what it’s worth, is the only reason I’m still hanging around here. In any other event you’d probably find me on South Street sharing a soda with a cat. I digress . . .

In part I of this series I explained how to get Speex to compile. Today we’ll try to import the OGG container format into our project and move onto Speex decompression. Because not everyone may be aware, a brief explanation of codecs and containers is in order. Audio encoding is typically made of two distinct pieces: a container format and an encoding. The audio container holds the meta data, or descriptive information, for the actual audio file. This meta data includes things like how many channels are in the audio data, what sample rate it is recorded at, and the endianness (not Indianness) of the audio data. Other data may be held in the container depending on the type of encoding you use. Finally, the descriptive (meta) data will have the location (offset) of the actual audio data in the file. The encoding is the actual raw audio data that is to be delivered directly to the output device or speakers. The encoding can be a direct digital dump (that is, the actual audio samples taken over time as the audio was recorded) or it can be a compressed variant of the raw samples.

It’s important to note that the encoding and the container are not usually dependent upon one another. That means you can mix and match Speex encoding with a wave container format just the same as you can put raw uncompressed samples in an OGG container. It’s just more common to find uncompressed data in a wave container and Speex compressed audio in an OGG container.

Let’s take a step back and try some TDD. Following best practices, we need to create a need for the Speex codec and the OGG container. I realize this is cart-before-the-horse style since we’ve already imported Speex, but bear with me as I’m doing this tutorial on my time off. Also, up until now I’ve been completely out of the TDD habit for a while as I strive to work closely with others who are uncomfortable with the style. We start by creating a “Unit Test Bundle” target in the project. Create a new Objective-C class named “CCCSpeexDecoderTest” using the “New File…” dialog and do not choose (unselect) the “also create .h file” option. Include the following in your new Objective-C class file.

//
//  CCCSpeexDecoderTest.m
//  SpeexLib
//
//  Created by Clifton Craig on 11/13/10.
//  Copyright 2010 Craig Corporation. All rights reserved.
//
#import <SenTestingKit/SenTestingKit.h>

@interface CCCSpeexDecoderTest : SenTestCase
{
	NSString* wavFile;
	NSString* speexFile;
}
@end

@implementation CCCSpeexDecoderTest

-(void) setUp
{
	wavFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"wav"];
	speexFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"spx"];
}

-(void) testFirstTest
{
	STAssertNotNil(wavFile,@"A sample wav file is required to test speex decoding.");
	STAssertNotNil(speexFile,@"A sample speex file is required to test speex decoding.");
}

@end

Running this tells us that we’re going to need some Speex data to operate on. (I’ve taken the liberty of generating a wav file using the “say” command and converting it to a Speex encoded file using the JSpeex API via Groovy. I’ll include both in a download of the project for this lesson.) Next we’ll create a structure to hold our unit tests and test resources. We will be following the “golden copy” testing pattern. You’ll later learn that using the pattern here is rather fragile; however, a more purist approach would take us through an exercise of re-writing the entire Speex project, which is outside the scope of my tutorial. Using Finder, I created a “Tests” and a “Resources” folder under my src folder in my project. Drag/drop these folders into Xcode to create the corresponding groups. Then drag/drop the sample wave and sample speex files (named “sample.wav” and “sample.spx” respectively) into the “Resources” group in Xcode. The test will now pass.

We now work our way through creating the decoder. I’ll spare you the individual steps in TDD as they would make this text overly verbose, and I’ll try to summarize instead. We need an actual decoder instance which we’ll be importing. TDD suggests we import what we don’t have, so add the import for a CCCSpeexDecoder type which does not exist. Build and fail. (The failure is important as it formalizes the class or feature you are about to add/change/delete.) We also need to be able to create this type and give it some audio to decode. It will also need a place to send the decoded audio data. I’m going to define an abstraction for providing/receiving the audio data so that we don’t necessarily need a file system. I’m adding a test to demonstrate/document the need for an audio source, a test to demonstrate/document the need for an audio sink, and one other test that formalizes how we plug these two abstractions into the decoder.

#import "CCCSpeexDecoder.h"

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
	NSString* wavFile;
	NSString* speexFile;
	CCCSpeexDecoder *decoder;
}
@end

@implementation CCCSpeexDecoderTest

//...

-(void) testAudioSourceIsDefined
{
	id<CCCSpeexAudioSource> anAudioSource = self;
}

-(void) testAudioSinkIsDefined
{
	id<CCCSpeexAudioSink> anAudioSink = self;
}

-(void) testCanCreateDecoder
{
	[[CCCSpeexDecoder alloc] initWithAudioSource:self andAudioSink:self];
}

And this calls for the additional CCCSpeexDecoder class which defines the abstractions…

#import <Foundation/Foundation.h>

@protocol CCCSpeexAudioSource

@end

@protocol CCCSpeexAudioSink

@end

@interface CCCSpeexDecoder : NSObject {

}

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink;

@end

#import "CCCSpeexDecoder.h"

@implementation CCCSpeexDecoder

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink
{
	self = [super init];
	if (self != nil) {

	}
	return self;
}

@end

Now we go back and add one more test that explains what we’re after.

-(void) testCanDecode
{
	[decoder decodeAudio];
}

Build and fail so that we know to define the method.

-(void) decodeAudio
{
}

We now have defined the ability to decode audio. We have to set our expectation for this method. (Test first begins with declaring or expressing a need for a feature or function, then setting an expectation for its behavior.) After invoking decodeAudio we would expect to have collected the decoded audio bytes somewhere. I’ll add a mutable data field in the test for this.

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
	NSString* wavFile;
	NSString* speexFile;
	CCCSpeexDecoder *decoder;
	NSMutableData *decodedAudio;
}
@end

@implementation CCCSpeexDecoderTest

-(void) setUp
{
	wavFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"wav"];
	speexFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"spx"];
	decoder = [[CCCSpeexDecoder alloc] initWithAudioSource:self andAudioSink:self];
	decodedAudio = [[NSMutableData alloc] init];
}

And we add a test to exercise the method and document/verify our expectation:

-(void) testDecodeAudioFillsDecodedAudio
{
	STAssertTrue([decodedAudio length] == 0, @"Should NOT have accumulated data");
	[decoder decodeAudio];
	STAssertTrue([decodedAudio length] > 0, @"Should have accumulated data");
}

Here’s the Oogly part. We are calling a method with no return value. We’ve defined an abstraction around collecting data (an audio sink) and we’ve made our test case adopt the protocol for this abstraction. The protocol defines no methods. The test calls for data to magically arrive in the mutable data field. Indirectly, our test is stating that given a source and a sink, when the decodeAudio message is sent we should have accumulated data in the sink. Running the test fails because we haven’t added the functionality. We step into the decodeAudio implementation and fill in the simplest thing that works.

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}

You see we are talking to an audioSink object here. Because we don’t really have an audioSink object in scope (I just made it up b/c it felt right) we need to declare it.

@interface CCCSpeexDecoder : NSObject {
	id<CCCSpeexAudioSink> audioSink;
}

If we run we still won’t get satisfaction because we haven’t ensured that the audio sink given in the initializer is the one we talk to when we decode audio. So we revisit the init method.

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink
{
	self = [super init];
	if (self != nil) {
		audioSink = [anAudioSink retain];
	}
	return self;
}

We also need to release in our dealloc.

- (void) dealloc
{
	[audioSink release];
	[super dealloc];
}

Let’s be more specific. When decoding audio we will want to discover the meta data or attributes of the audio. This information is usually the first group of bytes in a file and it explains what the rest of the file contains. We’ll declare an expectation to receive a callback in our sink which contains the meta data in an easily navigable NSDictionary.

-(void) testDecodeAudioReturnsHeaderInfoToSink
{
	STAssertNil(headerInfo, @"We should start with no header info.");
	[decoder decodeAudio];
	STAssertNotNil(headerInfo, @"We should now have header info.");
}

and we need to add an NSDictionary field to our test to record the header info.

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
        //Other fields...
	NSDictionary *headerInfo;
}
@end

We add the simplest thing that will work.

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[audioSink headerWasDecoded:[NSDictionary dictionary]];
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}

…And this calls for an additional method in our AudioSink protocol.

@protocol CCCSpeexAudioSink <NSObject>

-(void) audioWasDecoded:(NSData*) someDecodedAudio;
-(void) headerWasDecoded:(NSDictionary*) theAudioAttributes;
@end

This bleeds back into the test, where we store the attributes as our header info. Add the following to the test case.

-(void) headerWasDecoded:(NSDictionary*) theAudioAttributes
{
	headerInfo = theAudioAttributes;
}

Now we’ll look at individual attributes given to the sink during the parse. We set some expectations for numeric values mapped to specific keys in the header info.

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	NSNumber *value = [headerInfo valueForKey:@"sampleRate"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"frameSize"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"numberOfChannels"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"decodeBlockSize"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"framesPerPacket"];
	STAssertNotNil(value, @"Should have returned a number");
}

As you’ll note, there’s a pattern here, so we should do some refactoring.

-(void) assertNumericValueInDictionary:(NSDictionary*)aDictionary atKey:(NSString*)aKey
{
	NSNumber *value = [aDictionary valueForKey:aKey];
	STAssertNotNil(value, @"Should have returned a number");
}

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	[self assertNumericValueInDictionary:headerInfo atKey:@"sampleRate"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"frameSize"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"numberOfChannels"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"decodeBlockSize"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"framesPerPacket"];
}

Because I forget the attributes of the file provided, I’m going to use a discovery test technique. With this technique we use a dummy expected value in our assert and let the assertion error message tell us what the actual value is. I wouldn’t do this in normal testing. It’s only because I already have working code that I’m plugging in, and because this tutorial is getting wordy, that I’m going to take the cheap way out.

-(void) assertIntValue:(int)anInt isInDictionary:(NSDictionary*)aDictionary atKey:(NSString*)aKey
{
	NSNumber *value = [aDictionary valueForKey:aKey];
	STAssertNotNil(value, @"Should have returned a number");
	STAssertEquals([value intValue], anInt, @"Integer value %i should exist for key %@", anInt, aKey);
}

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"sampleRate"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"frameSize"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"numberOfChannels"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"decodeBlockSize"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"framesPerPacket"];
}

Once we implement the actual parsing logic we will start to see the actual values reported in the assertion errors. (I am adapting existing working code rather than developing the code from test cases.) We will pull the values from the errors back into the asserts to make the test pass and document what our expectations actually are.

Now we need to actually start pulling audio from our audio source abstraction. Because we used protocols, our test can pose (using the self-shunt pattern) as the audio source and provide data for the decoder. We step into the decoder and start doing some actual parsing.

-(void) decodeHeader
{
	[audioSink headerWasDecoded:[NSDictionary dictionary]];
}

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[self decodeHeader];
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}

Importing OGG

At this point we have to import OGG for decoding the container so we can read the file meta data. Download and unpack libogg (not liboggz) from the Xiph.org download site.

We need to add the ogg header files to the header search path, so drag/drop the ogg folder from the include folder in the root of the unpacked directory into your Xcode project. (/path/to/libogg-1.2.1/include/ogg) Choose to copy the files in the dialog and select your static lib target before accepting the dialog. Delete config_types.h.in, makefile.am, and Makefile.in from this folder and group. (Also move them to trash.) Double-click the project icon in the left tree pane and select the “Build” tab. Type “header search” in the search box at the top to narrow the options to the header search path. You need to add “$(SRCROOT)” as one of your header search path values here. Create an Xcode group for the ogg source code and drag/drop the “bitwise.c” and “framing.c” files from the unpacked libogg source folder. (/path/to/libogg-1.2.1/src)

At this point building the unit test target should leave you with errors from the latest round of header info asserts, which we will fix in the next part of the series. We have a fully configured project with access to both the Speex and OGG encoding/decoding APIs, which is exciting. In the next part of the series we will tackle calling into these APIs to decode the data. I’m going to upload my part II example project to my Box account so it will be in the right-hand pane for your downloading pleasure. Until next time…

(Some of you will have noticed I accidentally published this post the other day before finishing it. This is why I’m publishing it half-baked tonight. There’s a lot here and a lot more to cover. Keep checking back for updates!)