Speex On iPhone Explained Part II


*Update*
In part I, I neglected to point out that you should uncomment #define _USE_SSE in config.h, as mentioned below. This preprocessor directive will allow you to run on the device. It was also mentioned that you can get more speed out of Speex if you #define FIXED_POINT instead of FLOATING_POINT. I have not verified this, and Speex runs acceptably in my implementation without it, but it’s worth mentioning.
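For reference, here is roughly what that corner of config.h looks like in my copy. This is a sketch only; the exact lines and their order may differ in yours.

/* config.h: the switches mentioned above (sketch; placement may differ) */
#define _USE_SSE            /* uncommented, per the note from Part I */
#define FLOATING_POINT      /* what my implementation currently runs with */
/* #define FIXED_POINT */   /* reportedly faster on device, but I haven't verified it */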
*Update*

You have a lot of vocal audio data. Maybe it needs to be stored on an iPhone. Maybe it needs to glide effortlessly over the network like a slice of dental floss blowing in the wind. Whatever the case, you need a good compressor and Speex is a great solution. Hi, I’m Cliff. You’re here because you’re looking for an answer to your audio compression needs. I’m here to deliver the secrets to decompressing audio with the Speex codec. That, for what it’s worth, is the only reason I’m still hanging around here. In any other event you’d probably find me on South Street sharing a soda with a cat. I digress . . .

In part I of this series I explained how to get Speex to compile. Today we’ll import the OGG container format into our project and move on to Speex decompression. Because not everyone may be aware, a brief explanation of codecs and containers is in order. Digital audio is typically made of two distinct pieces: a container format and an encoding. The container holds the metadata, or descriptive information, for the actual audio. This metadata includes things like how many channels are in the audio data, what sample rate it was recorded at, and the endianness (not Indianness) of the samples. Other data may be held in the container depending on the type of encoding you use. Finally, the metadata will have the location (offset) of the actual audio data in the file. The encoding is the actual raw audio data that is to be delivered directly to the output device or speakers. The encoding can be a direct digital dump (that is, the actual audio samples taken over time as the audio was recorded) or it can be a compressed variant of the raw samples. It’s important to note that the encoding and the container are not usually dependent upon one another. That means you can mix and match Speex encoding with a wave container just the same as you can put raw uncompressed samples in an OGG container. It’s just more common to find uncompressed data in a wave container and Speex compressed audio in an OGG container.
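To make the metadata idea concrete, here is roughly the sort of descriptive information we’ll end up passing around later in this post, expressed as an NSDictionary like the one our decoder will eventually hand to its sink. The keys mirror the tests further down; the values are invented for illustration (8kHz mono is a typical narrowband Speex setup).

//Illustrative only: the keys match the header info tests later in this post,
//the values are made up to show what a typical narrowband Speex file might report.
NSDictionary *exampleHeaderInfo = [NSDictionary dictionaryWithObjectsAndKeys:
	[NSNumber numberWithInt:8000], @"sampleRate",       //samples per second
	[NSNumber numberWithInt:1],    @"numberOfChannels", //mono
	[NSNumber numberWithInt:160],  @"frameSize",        //samples per Speex frame
	[NSNumber numberWithInt:1],    @"framesPerPacket",
	[NSNumber numberWithInt:320],  @"decodeBlockSize",  //bytes per decoded frame (16-bit samples)
	nil];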

Let’s take a step back and try some TDD. Following best practices, we need to create a need for the Speex codec and the OGG container. I realize this is cart-before-the-horse style, since we’ve already imported Speex, but bear with me as I’m doing this tutorial on my time off. Also, I’ve been out of the TDD habit for a while as I strive to work closely with others who are uncomfortable with the style. We start by creating a “Unit Test Bundle” target in the project. Create a new Objective-C class named “CCCSpeexDecoderTest” using the “New File…” dialog and do not choose (unselect) the “also create .h file” option. Include the following in your new Objective-C class file.

//
//  CCCSpeexDecoderTest.m
//  SpeexLib
//
//  Created by Clifton Craig on 11/13/10.
//  Copyright 2010 Craig Corporation. All rights reserved.
//
#import <SenTestingKit/SenTestingKit.h>

@interface CCCSpeexDecoderTest : SenTestCase
{
	NSString* wavFile;
	NSString* speexFile;
}
@end

@implementation CCCSpeexDecoderTest

-(void) setUp
{
	wavFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"wav"];
	speexFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"spx"];
}

-(void) testFirstTest
{
	STAssertNotNil(wavFile,@"A sample wav file is required to test speex decoding.");
	STAssertNotNil(speexFile,@"A sample speex file is required to test speex decoding.");
}

@end

Running this tells us that we’re going to need some Speex data to operate on. (I’ve taken the liberty of generating a wav file using the “say” command and converting it to a Speex encoded file using the JSpeex API via Groovy. I’ll include both in a download of the project for this lesson.) Next we’ll create a structure to hold our unit tests and test resources. We will be following the “golden copy” testing pattern. You’ll later learn that using the pattern here is rather fragile; however, a more purist approach would take us through an exercise of re-writing the entire Speex project, which is outside the scope of my tutorial. Using Finder, I created a “Tests” and a “Resources” folder under my src folder in my project. Drag/drop these folders into XCode to create the corresponding groups. Then drag/drop the sample wave and sample Speex files (named “sample.wav” and “sample.spx” respectively) into the “Resources” group in XCode. The test will now pass.

We now work our way through creating the decoder. I’ll spare you the individual TDD steps, as they would make this text overly verbose, and summarize instead. We need an actual decoder instance, which we’ll be importing. TDD suggests we import what we don’t have, so add the import for a CCCSpeexDecoder type which does not exist. Build and fail. (The failure is important as it formalizes the class or feature you are about to add/change/delete.) We also need to be able to create this type and give it some audio to decode. It will also need a place to send the decoded audio data. I’m going to define an abstraction for providing/receiving the audio data so that we don’t necessarily need a file system. So I’m adding a test to demonstrate/document the need for an audio source, a test to demonstrate/document the need for an audio sink, and one other test that formalizes how we plug these two abstractions into the decoder.

#import "CCCSpeexDecoder.h"

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
	NSString* wavFile;
	NSString* speexFile;
	CCCSpeexDecoder *decoder;
}
@end

@implementation CCCSpeexDecoderTest

//...

-(void) testAudioSourceIsDefined
{
	id<CCCSpeexAudioSource> anAudioSource = self;
}

-(void) testAudioSinkIsDefined
{
	id<CCCSpeexAudioSink> anAudioSink = self;
}

-(void) testCanCreateDecoder
{
	[[CCCSpeexDecoder alloc] initWithAudioSource:self andAudioSink:self];
}

And this calls for the additional CCCSpeexDecoder class which defines the abstractions…

#import <Foundation/Foundation.h>

@protocol CCCSpeexAudioSource

@end

@protocol CCCSpeexAudioSink

@end

@interface CCCSpeexDecoder : NSObject {

}

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink;

@end

#import "CCCSpeexDecoder.h"

@implementation CCCSpeexDecoder

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink
{
	self = [super init];
	if (self != nil) {

	}
	return self;
}

@end

Now we go back and add one more test that explains what we’re after.

-(void) testCanDecode
{
	[decoder decodeAudio];
}

Build and fail so that we know to define the method.

-(void) decodeAudio
{
}

We have now defined the ability to decode audio. Next we have to set our expectation for this method. (Test-first begins with declaring or expressing a need for a feature or function, then setting an expectation for its behavior.) After invoking decodeAudio we would expect to have collected the decoded audio bytes somewhere. I’ll add a mutable data field in the test for this.

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
	NSString* wavFile;
	NSString* speexFile;
	CCCSpeexDecoder *decoder;
	NSMutableData *decodedAudio;
}
@end

@implementation CCCSpeexDecoderTest

-(void) setUp
{
	wavFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"wav"];
	speexFile = [[NSBundle bundleForClass:[self class]] pathForResource:@"sample" ofType:@"spx"];
	//The test itself poses as both source and sink (self-shunt).
	decoder = [[CCCSpeexDecoder alloc] initWithAudioSource:self andAudioSink:self];
	decodedAudio = [[NSMutableData alloc] init];
}

And we add a test to exercise the method and document/verify our expectation:

-(void) testDecodeAudioFillsDecodedAudio
{
	STAssertTrue([decodedAudio length] == 0, @"Should NOT have accumulated data");
	[decoder decodeAudio];
	STAssertTrue([decodedAudio length] > 0, @"Should have accumulated data");
}

Here’s the Oogly part. We are calling a method with no return value. We’ve defined an abstraction around collecting data (an audio sink) and we’ve made our test case adopt the protocol for this abstraction. The protocol defines no methods. The test calls for data to magically arrive in the mutable data field. Indirectly, our test is stating that given a source and a sink, when the decodeAudio message is sent we should have accumulated data in the sink. Running the test fails because we haven’t added the functionality. We step into the decodeAudio implementation and fill in the simplest thing that works.

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}

You see we are talking to an audioSink object here. Because we don’t really have an audioSink object in scope (I just made it up b/c it felt right) we need to declare it.

@interface CCCSpeexDecoder : NSObject {
	id<CCCSpeexAudioSink> audioSink;
}

If we run we still won’t get satisfaction, because we haven’t ensured that the audio sink given in the constructor is the one we talk to when we decode audio. So we revisit the init method.

- (id) initWithAudioSource:(id<CCCSpeexAudioSource>) anAudioSource andAudioSink:(id<CCCSpeexAudioSink>) anAudioSink
{
	self = [super init];
	if (self != nil) {
		audioSink = [anAudioSink retain];
	}
	return self;
}

We also need to release in our dealloc.

- (void) dealloc
{
	[audioSink release];
	[super dealloc];
}
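
One thing the listings above gloss over: the test case itself, posing as the sink, has to catch the decoded bytes and stash them in decodedAudio, or testDecodeAudioFillsDecodedAudio can never go green. The post doesn’t show that shunt method, so here’s a minimal sketch of what goes in the test case; it simply appends whatever the decoder hands back.

-(void) audioWasDecoded:(NSData*) someDecodedAudio
{
	//Self-shunt: the test collects whatever the decoder pushes to its sink.
	[decodedAudio appendData:someDecodedAudio];
}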

Let’s be more specific. When decoding audio we will want to discover the metadata, or attributes, of the audio. This information is usually the first group of bytes in a file, and it explains what the rest of the file contains. We’ll declare an expectation to receive a callback in our sink which contains the metadata in an easily navigable NSDictionary.

-(void) testDecodeAudioReturnsHeaderInfoToSink
{
	STAssertNil(headerInfo, @"We should start with no header info.");
	[decoder decodeAudio];
	STAssertNotNil(headerInfo, @"We should now have header info.");
}

And we need to add an NSDictionary field to our test to record the header info.

@interface CCCSpeexDecoderTest : SenTestCase <CCCSpeexAudioSource, CCCSpeexAudioSink>
{
        //Other fields...
	NSDictionary *headerInfo;
}
@end

We add the simplest thing that will work.

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[audioSink headerWasDecoded:[NSDictionary dictionary]];
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}

…And this calls for an additional method in our AudioSink protocol.

@protocol CCCSpeexAudioSink <NSObject>

-(void) audioWasDecoded:(NSData*) someDecodedAudio;
-(void) headerWasDecoded:(NSDictionary*) theAudioAttributes;
@end

This bleeds back into the test, where we store the attributes as our header info. Add the following to the test case.

-(void) headerWasDecoded:(NSDictionary*) theAudioAttributes
{
	headerInfo = theAudioAttributes;
}

Now we’ll look at individual attributes given to the sink during the parse. We set some expectations for numeric values mapped to specific keys in the header info.

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	NSNumber *value = [headerInfo valueForKey:@"sampleRate"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"frameSize"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"numberOfChannels"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"decodeBlockSize"];
	STAssertNotNil(value, @"Should have returned a number");
	value = [headerInfo valueForKey:@"framesPerPacket"];
	STAssertNotNil(value, @"Should have returned a number");
}

And as you’ll note a pattern here, we should do some refactoring.

-(void) assertNumericValueInDictionary:(NSDictionary*)aDictionary atKey:(NSString*)aKey
{
	NSNumber *value = [aDictionary valueForKey:aKey];
	STAssertNotNil(value, @"Should have returned a number for key %@", aKey);
}

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	[self assertNumericValueInDictionary:headerInfo atKey:@"sampleRate"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"frameSize"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"numberOfChannels"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"decodeBlockSize"];
	[self assertNumericValueInDictionary:headerInfo atKey:@"framesPerPacket"];
}

Because I forget the attributes of the file provided, I’m going to use a discovery test technique. With this technique we use a dummy expected value in our assert and let the assertion error message tell us what the actual value is. I wouldn’t do this in normal testing. It’s only because I already have working code that I’m plugging in, and because this tutorial is getting wordy, that I’m going to take the cheap way out.

-(void) assertIntValue:(int)anInt isInDictionary:(NSDictionary*)aDictionary atKey:(NSString*)aKey
{
	NSNumber *value = [aDictionary valueForKey:aKey];
	STAssertNotNil(value, @"Should have returned a number for key %@", aKey);
	STAssertEquals([value intValue], anInt, @"Integer value %i should exist for key %@", anInt, aKey);
}

-(void) testDecodeAudioHeaderInfoIncludesSpecificValues
{
	[decoder decodeAudio];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"sampleRate"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"frameSize"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"numberOfChannels"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"decodeBlockSize"];
	[self assertIntValue:-999 isInDictionary:headerInfo atKey:@"framesPerPacket"];
}

Once we implement the actual parsing logic we will start to see the actual values reported in the assertion errors. (I am adapting existing working code rather than developing the code from test cases.) We will pull the values from the errors back into the asserts to make the test pass and document what our expectations actually are.

Now we need to actually start pulling audio from our audio source abstraction. Because we used protocols, our test can pose (using the self-shunt pattern) as the audio source and provide data for the decoder. We step into the decoder and start doing some actual parsing.

-(void) decodeHeader
{
	[audioSink headerWasDecoded:[NSDictionary dictionary]];
}

-(void) decodeAudio
{
	NSString *pretendData = @"pretendData";
	[self decodeHeader];
	[audioSink audioWasDecoded:
		[NSData dataWithBytes:[pretendData cStringUsingEncoding:NSUTF8StringEncoding] length:[pretendData length]]
	 ];
}
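
The audio source protocol, on the other hand, still has no methods; the real ones arrive in the next installment once we parse actual bytes. Just to give a sense of the shape, here is one purely illustrative possibility (the method name is mine, not the final API): a pull-style read the test could satisfy by serving chunks of sample.spx.

//Purely illustrative; the real source methods are defined in the next part of the series.
@protocol CCCSpeexAudioSource <NSObject>
//Hand the decoder up to maxLength bytes of encoded audio, or nil when the source is exhausted.
-(NSData*) readEncodedAudio:(NSUInteger)maxLength;
@end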

Importing OGG

At this point we have to import OGG for decoding the container so we can read the file meta data. Download and unpack libogg (not liboggz) from the Xiph.org download site.

We need to add the ogg header files to the header search path. Drag/drop the ogg folder from the include folder in the root of the unpacked directory into your XCode project (/path/to/libogg-1.2.1/include/ogg). Choose to copy the files in the dialog and select your static lib target before accepting the dialog. Delete config_types.h.in, Makefile.am, and Makefile.in from this folder and group. (Also move them to trash.) Double click the project icon in the left tree pane and select the “Build” tab. Type “header search” in the search box at the top to narrow the options to the header search path. You need to add “$(SRCROOT)” as one of your header search path values here. Create an XCode group for the ogg source code and drag/drop the “bitwise.c” and “framing.c” files from the unpacked libogg source folder (/path/to/libogg-1.2.1/src).
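
Once the two .c files and the headers are in, a quick way to prove the library actually links (not a required step, just a sanity check I like) is a throwaway test that spins up and tears down an ogg_sync_state. The import path below assumes you dropped the headers where I described; adjust it if yours landed elsewhere.

#import "ogg/ogg.h"

-(void) testOggLinks
{
	//If bitwise.c and framing.c made it into the target, these symbols resolve and link.
	ogg_sync_state oy;
	ogg_sync_init(&oy);
	ogg_sync_clear(&oy);
}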

At this point, building the unit test target should leave you with errors from the latest round of header info asserts, which we will fix in the next part of the series. We have a fully configured project with access to both the Speex and OGG encoding/decoding APIs, which is exciting. In the next part of the series we will tackle calling into these APIs to decode the data. I’m going to upload my part II example project to my box account so it will be in the right hand pane for your downloading pleasure. Until next time…

(Some of you will have noticed I accidentally published this post the other day before finishing it. This is why I’m publishing it half baked tonight. There’s a lot here and a lot more to cover. Keep checking back for updates!)

XCode 3.2.4 Bug!


Just a quick note to those of you working with XCode 3.2.4 and trying to get any sort of unit testing working. There is a known bug/issue in XCode that causes unit test builds to crash. As it is written here (and here), “some part of the reporting code not respecting timezone issues. The output is tagged as ending before it began, so gets very confused and chokes.” The solution is to edit the Run Script step of the unit test target and make it look like this:

"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests" 1> /tmp/RunUnitTests.out 

Works for me!

You must fail before you can succeed!


If you try to fail and you succeed, which have you actually done? Are you a success? Or a failure? Is it good to be a failure? Is it better to be successful at failure? Let’s add detail to the question. By the way, I’m Cliff. You’re here because you tried to fail and you succeeded. Today’s topic is something I’ve visited before. It’s a new thing I’m trying with unit tests. Actually it’s an old thing to many but I’m trying it for the first time in both C++ and ObjC so it feels sorta new-ish.

How do you unit test?
Let’s start with how you unit test. What are your steps? What are the recommended steps? In order to be successful at TDD you must appreciate the entirety of the practice. It goes, “Red, Green, Refactor”. Red comes before Green, just like with traffic lights. What I’m saying is that you have to begin with a failing test. The first test is important. The first failure should describe what work you have to do. In my case, I’m swimming in uncharted waters (C++/ObjC++ testing) so there’s some learning that needs to be reinforced. Here’s how I’ve been starting my tests recently:

//
//  MyCoolNewObjectTest.m
//  Created by cliftoncraig07 on 12/1/09.
//  Copyright 2009 __MyCompanyName__. All rights reserved.
//
#import <SenTestingKit/SenTestingKit.h>
#include "MyCoolNewObject.h"

@interface MyCoolNewObjectTest : SenTestCase
{
  MyCoolNewObject *coolObject;
}
@end

@implementation MyCoolNewObjectTest

-(void) setUp
{}

@end

The test shell is completely empty except for references to the “thing” I’m about to create. I get my first failure, which is a compile error stating that this thing does not exist: “No such file error…” around the include. Here I have an opportunity to review my design, as minimalist as it is. I ask, “Does the error make sense? Is it expected? Do I like the name of this cool new thing I’m creating? Is it specific to the task I’m assigned to?” Always review each error with these questions. After creating the files for the new “thing” I then follow up with:

//
//  MyCoolNewObjectTest.m
//  Created by cliftoncraig07 on 12/1/09.
//  Copyright 2009 __MyCompanyName__. All rights reserved.
//
#import <SenTestingKit/SenTestingKit.h>
#include "MyCoolNewObject.h"

@interface MyCoolNewObjectTest : SenTestCase
{
  MyCoolNewObject *coolObject;
}
@end

@implementation MyCoolNewObjectTest

-(void) setUp
{ STFail(@"You must fail before you can succeed!"); }

@end

This is also important while I train myself on the new testing framework, because I need to catch myself misnaming the “setUp” method, which must have a capital “U”. It also lets me know that my test is actually running as part of the suite. Far too often in Xcode I’ll have the wrong target active and begin writing the wrong code because I was getting false positives from tests that were never run. Here’s where it gets interesting. The STFail in the above example does not fail! Now we face our original question: if you try to fail, as we have above, and you succeed, like our test suite will do here, which have you actually done? The first time I hit the unexpected success I got nervous and read all around the SenTesting framework and OCUnit. Eventually I settled on the conclusion that because there were no tests to run, the setUp was being optimized away as unnecessary. What the above example points out is that such a test case can never fail since there are no tests. That leads us to our final step…

//
//  MyCoolNewObjectTest.m
//  Created by cliftoncraig07 on 12/1/09.
//  Copyright 2009 __MyCompanyName__. All rights reserved.
//
#import <SenTestingKit/SenTestingKit.h>
#include "MyCoolNewObject.h"

@interface MyCoolNewObjectTest : SenTestCase
{
  MyCoolNewObject *coolObject;
}
@end

@implementation MyCoolNewObjectTest

-(void) setUp
{ STFail(@"You must fail before you can succeed!"); }

-(void) testSomething
{}

@end

…and here we get our familiar red bar! Our test case is complete and we now understand a little more about OCUnit. That’s it for today. Go on. Nothing else to see here. I know what you’re thinking. “We haven’t written or learned anything new!” Sure we have! We’ve written and validated our first test case in ObjC++. (I’m using OCUnit with C++ extensions to exercise or test drive C++ code.) The little bit we went over here persists through all the testing you will do from here on. It starts with the basic mechanics. Make sure every line of code is preceded by some test (or compiler) failure. If you’ve done more than 2-3 things and haven’t run a build to generate a failure then you’re completely off track.

AssertRunsInAnotherThread ?


**Update**
I’ve updated my example below after realizing I left out some important pieces…
**Update**

How do you assert that something runs in a different thread? Or more generically, how do you assert that something runs asynchronously? I need to do this in ObjC but also in Java. The scenario is as follows. I have a WhatzItz object instance stored in myWhatzItz. I tell myWhatzItz to doDisThing. I need to specify that myWhatzItz will doDisThing asynchronously. Experience tells the story that multi-threaded unit tests are dumb, complicated and wrong. So asking myWhatzItz to doDisThing then waiting or polling for completion of doDisThing is not what we want to do here. (It may be what you want to do but it’s not what I want. Since we’re on the same team and since I’m doing the writing you have to want what I want not what you want. I want you to want my wants… forget your wants… your wants are dumb anyhow!)

So how do we design myWhatzItz to doDisThing and doDisThing right? I’m going to suggest an interface… err, protocol. That is, an abstraction for what we want to design. In unit testing we stay focused on one thing. All distractions are tossed behind abstractions. So after asking myWhatzItz to doDisThing I need to assert that it uses asynchronous behavior. That brings us a step back. I have to design a dependency on asynchronous behavior first. Right now you’re probably wondering how asynchronous behavior works. Right now I’m ’bout to slap you like Bruce Lee slapped that boy in Enter the Dragon. (Or was that Return of the Dragon? Maybe it was Game of Death? Bonus points to the 1st person that gets the movie correct with a link to the clip on YouTube!) “Don’t focus on the finger or you’ll miss the beauty in the sky above!”, says Bruce. We don’t care about how asynchronous behavior works, as it’s irrelevant to our design. We only care that our work is done asynchronously! So we’ll code this test in Objective-C:

#import <SenTestingKit/SenTestingKit.h>
#import "MyWhatzItz.h"

@protocol MyAsynchronousBehavior
-(void) performAsynchronously:(SEL) aSelector onTarget:(id)aTarget withObject:(id)anObject;
@end

@interface MyWhatzItzTest : SenTestCase <MyAsynchronousBehavior>
{
  MyWhatzItz *myWhatzItz;
  SEL selectorPerformed;
  id targetForSelector;
  id parameterForSelector;
  int performAsynchronouslyInvocationCount;
}
@end

@implementation MyWhatzItzTest

-(void) setUp
{
   myWhatzItz = [[MyWhatzItz alloc] init];
   //The test uses the self-shunt pattern to listen for MyAsynchronousBehavior interaction.
   id theAsynchronousBehavior = self;
   myWhatzItz.asynchronousBehavior = theAsynchronousBehavior;
}

-(void) testMyWhatzItzWillAsynchronouslyDoDisThingWhenAskedToDoDatThing
{
  STAssertEquals(performAsynchronouslyInvocationCount, 0, @"Assuming that we haven't been asked to performAsynchronously");
  [myWhatzItz doDatThing];
  STAssertEquals(performAsynchronouslyInvocationCount, 1, @"We SHOULD HAVE been asked to performAsynchronously");
  STAssertEquals(selectorPerformed, @selector(doDisThing:), @"Should have asked to doDisThing using asynchronous behavior.");
  STAssertEquals(targetForSelector, myWhatzItz, @"Should use myWhatzItz for asynchronous execution.");
}
#pragma mark MyAsynchronousBehavior protocol methods
-(void) performAsynchronously:(SEL) aSelector onTarget:(id)aTarget withObject:(id)anObject
{
  //Record the interaction so the test can assert on it afterward.
  performAsynchronouslyInvocationCount++;
  selectorPerformed = aSelector;
  targetForSelector = aTarget;
  parameterForSelector = anObject;
}
@end
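
The post never shows MyWhatzItz itself, so for completeness here’s a sketch of what would satisfy the test above. The property and method names are inferred from the test, not taken from my actual project code.

@interface MyWhatzItz : NSObject
{
	id<MyAsynchronousBehavior> asynchronousBehavior;
}
@property (nonatomic, retain) id<MyAsynchronousBehavior> asynchronousBehavior;
-(void) doDatThing;
-(void) doDisThing:(id)anObject;
@end

@implementation MyWhatzItz
@synthesize asynchronousBehavior;

-(void) doDatThing
{
	//Delegate the actual work to whatever asynchronous behavior was wired in.
	[asynchronousBehavior performAsynchronously:@selector(doDisThing:) onTarget:self withObject:nil];
}

-(void) doDisThing:(id)anObject
{
	//The real work would happen here, on whatever thread the behavior chooses.
}

-(void) dealloc
{
	[asynchronousBehavior release];
	[super dealloc];
}
@end

In a real project the MyAsynchronousBehavior protocol would live alongside MyWhatzItz (in MyWhatzItz.h) rather than inside the test file, so both the class and the test can see it.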

Supplying the asynchronous behavior becomes a construction/initialization concern, which could be extracted into a wiring framework like Interface Builder (or, if you do Java, the Spring Framework). The nice thing is that we can swap our asynchronous behavior without touching the rest of the project. We can do something quick/dirty like detachThread… and later get more elegant with run loops. Today’s tip comes to you (me) completely by accident. I’ve been trying to test drive things in Java and ObjC for several years now and I’ve never gotten this part quite right. Feel free to share your opinions/experiences with threads and tests.
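
For the quick/dirty flavor, one possible implementation of the behavior protocol (the class name is mine, purely for illustration) just leans on NSThread:

//Sketch of a thread-per-call behavior; a run loop or queue based version could replace it later.
@interface CCCThreadedBehavior : NSObject <MyAsynchronousBehavior>
@end

@implementation CCCThreadedBehavior
-(void) performAsynchronously:(SEL) aSelector onTarget:(id)aTarget withObject:(id)anObject
{
	[NSThread detachNewThreadSelector:aSelector toTarget:aTarget withObject:anObject];
}
@end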

**Update**
The above example demonstrates a few important unit testing concepts. First, it uses the self-shunt pattern to allow the test to become a mock and listen for interactions between the tested object and its collaborator(s). Second, it demonstrates the importance of using tests to carve out the pieces that should NOT be under test. In this situation it’s multi-tasking; in other scenarios it could be anything from a DBMS to a network call. I could have illustrated an example where I spin up a thread pool of some sort from within the test, started work, paused the main thread, and waited for a response. That’s what I’ve done in the past, and it becomes extremely complicated and brittle. Instead, I use the elegance of abstraction to make the test 100% predictable, 200% more simplistic, and 300% more maintainable. Where do I get those statistics? The short answer is they’re sequential. (The long story begins yesterday when my 5yr old spontaneously decided to count from 1 to infinity in the middle of our shopping trip. You see, she only recently learned her numbers up to 100 and after getting to 199 we all realized the child’s computational limits as she continued with one hundred one hundred, then followed up with one hundred one hundred one, one hundred one hundred two…)
**Update**

Mock It To Me!


I’m playing with OCMock on a random iPhone project and having nothing but pain. Naturally my only outlet is to you, dear reader. Thanx for stopping by and thank you all for the comments you’ve been leaving. Not everyone realizes how important it is not only to comment with well formed answers but to reply with well thought out questions and problems. We all suck our knowledge from the same cloud of computers nowadays. I don’t know about you but I constantly find myself trapped behind a convoluted compiler or runtime error and an unanswered question on some forum. No matter how good Stackoverflow is getting these days it still does not have AI and cannot answer all of your questions in twenty minutes or less. That leaves many adventures and problematic programming paths without a “happy ever after”. Hi, I’m Cliff. My job is to continually post cryptic compiler messages and explanations on this site for all to follow. One day my Ode to an EmptyStackException will be the saving grace of a project on the brink of a deadline. (I’ll gladly share my PayPal to those interested in tipping should any of my topics be informative enough to land such a multi-million dollar deal.) I digress.

Today, as I was saying earlier, I’m wrestling OCMock for iPhone development. When I set up an iPhone project I drag/drop the OCMock.framework folder in, attach it to my UnitTests target, and double check the framework search paths build property for my test target. I then use a slightly modified version of the OCMockObjectTests to validate that all of the features are available. On my latest project I got anal. There was a particular feature (ordered method invocations) that wasn’t in the release I downloaded last year. I updated my version of OCMock, then instead of modifying its test suite I copied it into my project. Certain things wouldn’t compile for one reason or another so I hacked around a bit. I got down to a snippet that was attempting to return a “BOOL” from a mock and began seeing these errors:

expected specifier-qualifier-list before ‘typeof’

I remember these errors from a while ago, when I ran across an article describing a similar problem. I hacked the code around a bit, going as far as to download OCMock’s source and run the tests from within it. The funny thing is that those tests pass without issue, possibly because they’re running in OS X world. When run in iPhone world, things seem to be slightly different. Here’s what I see in particular.

This works:

BOOL variable;
[NSValue value:&variable withObjCType:@encode(BOOL)]

This doesn’t work:

BOOL variable;
[NSValue value:&variable withObjCType:@encode(typeof(variable))]
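
If the complaint really is the compiler’s C dialect rejecting the GNU typeof extension (that’s my read of it, not something I’ve confirmed against OCMock’s source), the double-underscore spelling is usually accepted even under stricter dialect settings:

//Possible workaround (assumption): __typeof__ survives strict C dialect settings where plain typeof does not.
BOOL variable = YES;
NSValue *wrapped = [NSValue value:&variable withObjCType:@encode(__typeof__(variable))];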

There are a few other subtle issues that I can’t get into because I hacked around so much I lost my place. I’m now facing a “Bus error” when I run through the remaining tests. If you know what I don’t, then type one of those comment things. You’ll not only improve my life, but you’ll enrich the lives of hundreds of Googling souls facing similar trouble.

CoverStory on iPhone projects


I really, really, really don’t care about test coverage when I develop, because it’s one of those things you get for free when you follow the right practices. I always thought projects like Clover and Cobertura were a waste of time. However, I recently started dreaming up an interesting use case for these kinds of tools. Help me, if you will, get CoverStory (a test coverage tool for ObjC) up and running so I can prove myself wrong. I’ve followed the [sparse] documentation and steps on the CoverStory home site. There’s a section on including an alternate, fat libgcov.a file that confused me. Not knowing what path to what file, or where to set the path, I stumbled on a way of satisfying the “no such file for -lgcov” error by dragging/dropping the fat libgcov.a file into my project. Now when I build I get the .gcno files but no .gcda files, and CoverStory won’t report coverage. (Screenshot: my CoverStory settings.)

iPhony Frameworks


So I’m writin’ all kinds of Objective-C code, right? And I’m finally in my element because I got Google Toolbox For Mac doin’ the SenTest thing, right? I even figured out how to include modules using project relative paths. That’s when my trouble started. Y’see, I thought frameworks were no different than modules. Of course they’re different or else they wouldn’t be called frameworks! Still, I blurred the line between the two. So then I’m writing all these gnarly tests using OCMock. Remember OCMock? I complained on the forums about adding it to an iPhone project. Then I finally found a hack to get it to run in my iPhone tests. Then the whole module thing started to make me feel warm and fuzzy inside, so I thought I had a better approach. At any rate, I tried the same approach with Hamcrest, a tool advertised on the OCMock home page. I couldn’t get it to work. Finally, it dawned on me! You can’t add frameworks to iPhone projects! [Honestly it didn’t dawn on me, I had to be told then re-told by someone who knows waaay more about iPhone stuff than me.]

The point is that frameworks in XCode iPhone projects don’t work like modules. With a module, you can just drag/drop the module file into your project, set its dependencies in your target, build, and then you’re off and running. Frameworks are different. Frameworks have to exist (to the best of my knowledge, and until someone who knows stuff tells me some new stuff) under /Library/Frameworks in order for SenTest test cases to use them. Frameworks cannot be used at all at run time in an iPhone project. I don’t even think they’ll run in the Simulator, which means no debugging unit test code, which means you’d better write some very fine grained unit tests, which turns out to be necessary anyhow. That’s the point. You can’t run framework code in an iPhone project, but you can run it in a GTMSenTestCase as part of the build (not the build/go process). If anybody knows better, speak up now or I’ll forever hold out the peace sign.