Ruby on Rails with minimal Postgres


Today I learned how to build a Rails app without having all of Postgres installed on my machine. My whole purpose is to avoid adding too much software to my Mac, so I am running Postgres in a Docker container instead. I hit a snag because it appeared that the pg gem needed all of Postgres installed. Doing a bundle install on the project was giving me an error about a missing pg_config. Not knowing much, I googled around, and everywhere I looked seemed to indicate that you have to run brew install postgresql to work around the error. After discussing with a colleague I realized I just needed the PG client APIs, which live in the libpq brew formula. The actual solution is to brew install libpq and then put its bin folder on your PATH so that Ruby can find the pg binaries required to build the gem.
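In other words (the /usr/local/opt/libpq path is where Homebrew put the formula on my machine; double-check yours with brew --prefix libpq):

brew install libpq
export PATH=$PATH:/usr/local/opt/libpq/bin
bundle install

I’m parking this tip here for the future me who will come back in a year or so puzzled as to how I accomplished this. Future me, you’re welcome!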

GraphQL and React Tutorial Part 1


Back from a several-month-long hiatus, I’m coding, experimenting, and finally blogging again! This afternoon I wanna try to create a GraphQL tutorial. The idea is to introduce this new concept to people with maybe just a little JavaScript experience and to be as beginner friendly as I possibly can. It’s focused on GraphQL but I’m also using a React frontend to display it. I originally intended to cover this in a video, then I thought I’d blog it because I was at a noisy Starbucks waiting for my car. I thought I could squeeze this into one post but I needed to be verbose enough to explain it for newbies. I’m finally deciding to release this as a multi-part blog series. So, with no further ado (why people tend to take their “ado” several steps further always perplexes me), I give you… LESSON.

What is GraphQL?
Let’s start by discussing what ISN’T GraphQL. It is NOT a Facebook project. It’s not a framework, it’s not a library, it’s not a replacement for RESTful services, it’s not the thing that’s going to replace you or your current work responsibilities. At its core, GraphQL is a specification for a query language to interact with an API or set of APIs, along with a runtime. That’s a bit to ingest. In simpler form, GraphQL is merely a set of rules that explain how to interact with an API (Application Programming Interface) over the internet. It’s like a blueprint of best practices for using an internet API. There are many implementations of GraphQL that you can find online. GitHub has a GraphQL front end for their APIs with an interactive explorer web page you can use to experiment and learn. I will cover this special tool later on. I will be using an implementation written in JavaScript.

Why is GraphQL?
GraphQL began its life at Facebook as an idea for pulling data together in an efficient way for mobile devices. Way back, when Facebook for mobile was little more than a glorified webpage tucked inside a native app, the engineers sought an efficient way of pulling the famous feed, interacting with the buddies list, and other uses of the Facebook API. They developed GraphQL to solve many of their pain points. GraphQL allows you to pull back data dynamically in many varying forms with a single call to a single endpoint. It is flexible enough to evolve as your needs change without requiring re-deployment of different RESTful services or other complex infrastructure changes. I’m gonna stop here because I’m already starting to sound like some sort of weird technical infomercial.
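To make that concrete, here’s a hypothetical query against a hypothetical social network API (none of these field names are real, they’re just for flavor). The client asks for exactly the shape of data it wants, in one request:

{
  me {
    name
    friends {
      name
      profilePic
    }
  }
}

The response comes back as JSON matching that exact shape, nothing more, nothing less. A mobile client that only needs names could drop profilePic from the query without anyone redeploying a server.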

How do I GraphQL?
Now let’s hammer into the meat and potatoes of our lesson. (Why I chose to use a hammer with my “meat and potatoes” analogy will remain a mystery but just follow along, k?) We will begin with an ExpressJS server using NodeJS. We will introduce GraphQL with the ExpressJS server, then eventually create a React app that uses GraphQL to build a fake social network site. We’ll use an entirely original name for the site, FaceBox. If that sounds like a mouthful, relax. I’ll explain each piece. Just know for now that there will be 2 JavaScript programs in this tutorial. One program will run directly on your computer and the other program will run in your web browser.

NodeJS is a platform that allows you to write programs with JavaScript that run outside the browser. Traditionally JavaScript was developed and intended for web browsers but some creative folks took it a step further and created this platform which runs on any computer. You can download and install NodeJS from nodejs.org. Once you’ve done that you will be able to use a special command called npm to begin developing your program. The npm command is a special command line tool used to create JavaScript projects and also fetch JavaScript packages from the internet. Because nobody writes a program entirely from scratch (the same way nobody boils their own molten metal to build a car or cooks the rubber for the tires) we use npm to assemble our JavaScript program from various open source packages. We won’t focus too much on NodeJS, npm, or ExpressJS. These are just a few of many tools and packages we will use to assemble our program.

With the NodeJS part explained let’s define the other pieces. ExpressJS is a web framework that helps us build our web server, a program that serves web pages when you browse to it. It’s built completely in JavaScript so you can use the same language you use in front end development to become full stack. React is a JavaScript library that allows you to build user interfaces using a special XML-like syntax.

Web Server Talk
Before we start going full stack and writing code it’s important to be familiar with some basic web server terms like web addresses and HTTP. HTTP and HTTPS are the two popular sets of rules, known as protocols, which are used on the internet. In short, your web browser sends a GET request whenever you browse a web site or click a link. It’s like it’s saying to the web server, “Hey, get me the home page on Twitter’s domain.” The web browser also sends either a POST or PUT request when you fill out a form, or upload an image or a video. Lastly, your browser can send a DELETE request whenever you want to delete something, like removing a post or a TWEET from your timeline. There are other request types but these 4 are the more popular ones. These are the request types used to build what we call RESTful web services. I won’t go much more in depth on how HTTP works but if you’re interested you can follow Julia Evans who does an excellent job explaining it with her comic-zines.
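Just for illustration, here’s what a couple of those request types look like when issued by hand from JavaScript in a browser (example.com is a placeholder, of course):

// a GET request, the same kind your browser sends when you click a link
fetch('https://example.com/');

// a POST request, the same kind your browser sends when you submit a form
fetch('https://example.com/posts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: 'hello' })
});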

A web address, also known as a URL, is a String made up of several parts: a protocol, a host name, a port number, a context path and a query string. Let’s look at Twitter for example. Type https://twitter.com:443/home?newtweets=true into your web browser to go to the Twitter home page. The first part is the protocol, https, which stands for HTTP Secure. The protocol could be http instead, which is HTTP without the security piece. It works similarly. Most sites these days use https, however there was a time when http (without the “s”) was common. The next part is the domain or host name, in this case twitter.com. This is the actual computer or machine your browser is connecting to. When you typed the example into your browser you probably noticed the port number disappeared. This is a convenience because port 443 is the default for any https web server so you don’t have to type it. The next part is the context path, /home. This directs the browser to different places on a particular domain or host. The last part is the query string. I made this ?newtweets=true query string up as it could be anything. It is just a set of name=value pairs that are introduced with a question mark. Twitter doesn’t actually use a newtweets=true query string so including it does nothing. I won’t use query strings in this tutorial but it’s good to know what they are and where they exist in a URL.
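If you wanna see those pieces pulled apart in code, modern JavaScript can do it with the built-in URL class (notice the port comes back empty, because 443 gets dropped as the https default, just like I described):

const url = new URL('https://twitter.com:443/home?newtweets=true');
console.log(url.protocol); // 'https:'
console.log(url.hostname); // 'twitter.com'
console.log(url.port);     // '' because 443 is the default for https
console.log(url.pathname); // '/home'
console.log(url.search);   // '?newtweets=true'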

Now with a basic definition of some core concepts we can start. Create a folder anywhere on your computer and call it faceboxGQL. Open a command terminal and change to this new directory you created. If you feel lost there are resources to help you understand how to navigate your file system on Mac or on a Windows system. Once there, run the npm init command. This special command will give you a tech interview, asking a bunch of questions you might not have the answer to. Don’t sweat it; just hit “Enter” on each question. It turns out that npm already knows the answer to its own questions and it merely enjoys interrogating newcomers. Once you’ve created your project you can start adding packages to it.

Run npm add express to add the first important package. The express package is the web server that we will use to host the GraphQL API. Next run npm add --save-dev nodemon babel-cli babel-preset-env babel-preset-stage-0 to add a few more packages. The nodemon package is a tool we will use to monitor the files we will eventually add and restart the server whenever they change. The babel packages are tools we will use to convert our JavaScript program from one form to another (a process known as transpiling). We use the --save-dev flag on the command line to mark these packages as development time packages. It’s a minor detail for now, but if you were to publish your program on the internet then the development packages would not be included. Inside the package.json file find the scripts section and add the following code inside it:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "nodemon ./index.js --exec babel-node -e js"
},

This block defines a start script, which holds a special command we’ll use to start the ExpressJS web server. It might look like Greek or Spanglish but stay with me. We’re building towards something here.
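(If you’re curious, that start script tells nodemon to run index.js through babel-node, the Babel-aware version of node, and re-run it whenever a .js file changes.) While you’re in package.json you can also peek at where the --save-dev packages landed. You should see something roughly like this, though your version numbers will almost certainly differ:

"devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0",
    "babel-preset-stage-0": "^6.24.1",
    "nodemon": "^1.18.4"
}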

WWW? Wait, What Web Server???
I’m so glad you asked! We don’t have a web server. We’re going to make one! That’s right, if you’re new to backend development or desiring to eventually graduate from an IHOP short stack and become full stack, then this is where the rubber meets the road. Create a new file in the project folder called index.js. This is where we will add the JavaScript code that builds our web server. Add the following inside of index.js:

import express from 'express';

const PORT = 8090;

const app = express();

app.get('/', (req, res) => {
    res.send('GraphQL is AmAzInG!');
});

app.get('/graphql', (req, res) => {
    res.send('GraphQL is Not available!');
});

app.listen(PORT, () => console.log(`Running server on localhost:${PORT}/graphql`) );

That’s it! With this little bit of code you have enough to run a web server directly on your computer. The first line is an import. It defines a variable named express and imports or loads it with “stuff” from the express package that we downloaded earlier with the npm command. You don’t need to worry about the “stuff” that’s floating inside this variable any more than you need to worry about everything that’s rotating/jiggling under the hood of your car. (That is, you only need to worry when things jiggle loosely or all smokingly.) The next line defines a port that you will browse with your web browser. Most people know what a URL or web address is but what you may not know is that you usually browse a port on a web browser. For most activities like checking Facebook or sending tweets you are browsing the default port number 443, which the browser understands so it’s not included in the URL. In this lesson you will browse your web server on a different port so we define it here. The next line creates an app object from the express package which lets you listen for different HTTP requests coming from the web browser.

The next two blocks attach some fancy ES6 arrow functions to two different kinds of HTTP GET requests your web browser can make. The first block defines a request to the root context path, ‘/’, which is like the default. It uses an ES6 function that takes two parameters, req and res, for the incoming request and outgoing response respectively. The outgoing response variable holds an object that we use to call the send function inside the block. We “send” a String response which literally says, “GraphQL is AmAzInG!” The 2nd block is like the first, only it attaches to a different request context path, “/graphql”, and sends a different response. A context path is the part of a web address or URL that comes after the host name. We are using the 1st block to verify that the server is up and running and this 2nd block as a placeholder where we will eventually attach our GraphQL server logic. Finally we have the last line which tells the Express app to listen on the special PORT defined earlier.

With this we are ALMOST ready to run our server but first we need to define some presets for Babel. These presets are plugins that Babel uses to support different language features. Because JavaScript is a huge language with several features, and because I really want to focus primarily on GraphQL, I’m going to gloss over the details of Babel presets and ask that you trust me for a moment. Create a new file called .babelrc in your project folder then copy/paste the following into it to get us moving to the next step.

{
    "presets": [
        "env",
        "stage-0"
    ]
}

At this point you should be able to run your server by using the npm start command on the command line from within your project folder. After it starts you can browse the server by entering http://localhost:8090 in the address bar of your web browser. This web address has two parts, a host name, localhost, and the port we discussed earlier, 8090. Usually you enter web addresses without a port because the web server is running on the default port 443 which the web browser already knows to connect to. When you have a server running on a different port then it becomes necessary to include it as part of the web address.

If there are errors there are a few things you can double check. First, make sure you don’t have any other web server running on the port we defined in code. You can check this by opening your browser and trying to browse the same port without starting the server. Enter http://localhost:8090 in your browser’s address bar and see if anything appears. If you get a connection error or a site cannot be reached message then it means there isn’t any other server running on that port. Only one server can run on any given port. If you do get a page then change the const PORT = 8090 in the code to a different number, run the npm start command again, and use this new number in your web address. If there is another error you can check the code for typos and finally try deleting the node_modules folder and running the npm install command again to reinstall the project’s packages.

Finally we come to the GraphQL part. Assuming everything is working so far, go back to your code editor. Also, you can stop the server by typing Ctrl-C on the command line. (This special hot-key sequence will abruptly kill any program that is currently running on the command line.) We will add 3 important pieces: the GraphQL packages, a schema, and a resolver. These will be enough to build our very own GraphQL server.

GraphQL packages
Run npm install graphql express-graphql on the command line to install the GraphQL package and the Express extension for GraphQL. The graphql package contains the core GraphQL components while express-graphql contains the component objects we will use to connect our Express web server to the core. Add the following two lines to the top of the index.js file:

import graphqlHTTP from 'express-graphql';
import schema from './schema';

These lines import the express-graphql extension and a schema.js file which we have not yet created. Now, still inside index.js, change the 2nd block we discussed earlier (the /graphql placeholder) to the following.

app.use('/graphql', graphqlHTTP({
    schema,
    rootValue: root,
    graphiql: true
}));

This code calls the use function of the app object instead of the get function. In this case we are installing an extension at the /graphql context path rather than connecting an ES6 function to the get method. It’s as if we’re telling Express, “Hey, use this GraphQL function with any request that has ‘/graphql’ as its context path.” We’re passing three values in an object to this graphqlHTTP function: a schema, a root resolver, and a boolean flag as part of a “graphiql” option. This boolean flag will enable the GraphiQL explorer interface, which is a tool that runs in the browser and lets you explore the available APIs presented by GraphQL. That will make more sense in a few, but let’s move on to the other pieces. The schema and root resolver have not yet been defined. We’ll cover that in a moment. The schema is an object that holds the definition of your API. Think of it as the ingredients section on a box of Cheerios. The root resolver is the object that actually is the API. It can be a source of data and function calls itself or work on behalf of an existing API. Let’s look at both of these objects in depth.

The Schema
Create a new file in your project folder called schema.js and copy/paste the following inside it:

import { buildSchema } from 'graphql';

const schema = buildSchema(`
    type Query {
        hello: String
    }
`)

export default schema;

Here we import a buildSchema function from graphql at the top. We then call this function passing what looks like JSON text. This is schema definition syntax. It defines a Query type that holds a single API called hello. The hello API returns a String type, identified by the colon following the text hello in the curly braces. That is, the colon after the hello name separates the returned type from the name of the API. With this little bit of code we have defined the shape of our first GraphQL API. This schema definition is passed to the buildSchema function which returns a schema object. We then export the schema object as the default export from this file, which means it is the thing that is immediately visible when another JavaScript file attempts to import from this file. This schema object will be the same object that we give to the graphqlHTTP function from the prior code snippet above.

The Resolver
The last piece we need is a query resolver. This is a special piece of code that resolves or connects a GraphQL query to some existing data or object. Open the index.js file and add this text right before the second code block we just updated with app.use.

//Query Resolver
const root = { hello: () => "Hello, It's FaceBox!"}

Here we are defining an object that has a single ES6 function named hello. This object is what we give to GraphQL in the next block and it is what GraphQL will use to look up anything defined in the schema. The final index.js should look like this.

import express from 'express';
import graphqlHTTP from 'express-graphql';
import schema from './schema';

const PORT = 8090;

const app = express();

app.get('/', (req, res) => {
    res.send('GraphQL is AmAzInG!');
});

//Query Resolver
const root = { hello: () => "Hello, It's FaceBox!"}

app.use('/graphql', graphqlHTTP({
    schema,
    rootValue: root,
    graphiql: true
}));

app.listen(PORT, () => console.log(`Running server on localhost:${PORT}/graphql`) );

From the top we’ve imported the graphqlHTTP function from the express-graphql package. We’ve imported schema from our newly created schema.js file. We’ve defined an object named root with a single function named hello that returns a string, “Hello, It’s FaceBox!” We pass both the schema and the root object to the imported graphqlHTTP function and set a special graphiql flag to true. This flag activates the powerful GraphiQL explorer interface, which lets you explore all of the APIs you’ve defined in your schema. At this point you should be able to run your very first GraphQL server. Open your command terminal and run the command npm start. This should start the server and spit out a few lines of log text. You can expect to see some, but not all, output like: “[nodemon] to restart anytime…”, “[nodemon] watching dirs(s): *.*”, “[nodemon] watching extension: js”, “[nodemon] watching the Simpsons on Fox network, did you set your VCR to tape it?”

Open your web browser and enter the following web address, or URL: http://localhost:8090/graphql. This will open the GraphiQL explorer. We’ll cover the explorer in detail in the next part of this series. For now, give yourself a pat on the back and congratulations! You’ve implemented a GraphQL endpoint and this is just the beginning!
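If you wanna peek ahead, type this query into the left pane of the explorer and hit the play button:

{
  hello
}

You should get back a JSON response wired up by our root resolver:

{
  "data": {
    "hello": "Hello, It's FaceBox!"
  }
}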

Also, check back over the next several days as I continue to work through this post (updating any grammar or syntax mistakes) and adding new parts to the series. I’m sort of rushing this out because I’ve blabbed about it for so long. It’s premature, and not proof-read so please bear with me. Until the next part…
Peace Party People, haha! See you later!!! ✌🏽

You can find the source to this tutorial here: https://github.com/cliff76/FaceboxGQL

Also you can find a video tutorial series covering the same project.

What’s new???


What’s new party people???
It’s been a long time.
I shouldn’t have left you…
Without a blog post to step to…

I’ve been on a social media hiatus for a minute and I’m ’bout to step out the shadows. While I do that I wanna talk about some new stuff I’ve been looking into. I also haz questions about how the new compares to the old. Let’s begin, shall we? Oh, wait… HAI, I IZ Cliff. You here because you really wanna party with me. So put your source code where my eyes can see. Now that we got that out of the way, let us continue.

Flutter
A former coworker buddy of mine (he knows who he is, LOL) asked me about Flutter a couple of days ago. While it sounded new I kinda remember looking at it some years ago and thinking, “that’s sorta cool!” I never did anything with it at the time but now I’m taking a fresh look. I felt challenged by Mr. Coworker-friend and I thought I’d open this blog post by throwing down my gauntlet and saying, “I hereby formally acceptith thine challenge fine sir!!!” (I have no idea why I’m speaking in that form, by the way.) Flutter is a cross-platform development framework from Google that uses the Dart programming language. (Wow, it’s been a long time since I’ve done mobile!) It attempts to make UI development a breeze while facilitating the write once, run everywhere philosophy. It’s sorta new but it also sounds like some other technology released in the 90s which was supposed to enable Write Once Run Anywhere. Anybody know what that tech was? I’ll give a hint: it’s still actively used to develop mobile apps! I digress, and I’m not hating, haha. I’m excited to see what’s going on with Flutter now. Look for one or two blog posts about how awesome or totally whack Flutter is as I attempt to dive back into mobile.

Chromebooks
The thing that actually prompted today’s post was this Chromebook I saw on the Amazon truck this morning. This is not new tech. This is not even considered old tech. I honestly think it’s confusing tech. I’m confused because I never really used a Chromebook. What I wanna know is: are these things practical? They remind me of the old Netbook things that were popular around the 2010-2011 timeframe. The more important question is how does a reasonably priced Chromebook of today compare to the ridiculously overpriced Pixel Chromebook from years ago? Are we at the point where a modern Chromebook is equivalently spec’ed to that beast? Also, why was that product so expensive? Was there any practicality in that?

>>> Sidebar
I just did a little homework and apparently the original Chromebook Pixel hit end of life last August. The 2013 initial release has a 2560×1700 resolution on a touch display, a dual core i5 with HD Graphics, and all day battery life. Even with all of these specs I could never understand the price tag, as it smells like an overpowered web browser in a hard shell. I dunno, call me old school, but I feel a personal computing device (especially with that much horsepower) ought to be usable for more than internet based tasks. In other words, it should have a ton of support for offline work and run a dump truck load of locally installable apps that can do anything from video editing to word processing to Minecraft.

Back to Chromebooks in general, Google has updated Chromebooks available at a smaller price tag than the original. There is the Pixel Slate and the 2017 Pixelbook, and apparently another Pixelbook in the making. The Google offerings are still much pricier than other Chromebooks on the market but in the end I can’t bring myself to care.

Chromebooks are not new. They are a five year old solution to a problem that people still have to this day… multi-device drama. Nobody wants to carry a laptop, a tablet, and a Smartphone. Pixelbooks try to oversimplify the laptop problem while delivering a tablet that still doesn’t blow away the iPad. Ultrabooks also try to deliver a convertible laptop that doesn’t work as well in tablet mode. Even Samsung has tried to address this by supercharging their Smartphones and developing a cheap piece of small hardware you can use to dock into a keyboard/mouse/monitor. I guess the newest news about development in this space is the apparent absence of any clear winner or best solution.

Buck Build
Build systems are cool, aren’t they? If you’re a rookie developer you may not know what a build system is. It is essentially a set of programs, frameworks, and/or tools that turns the programming language you type in your editor into an installable product made up of 1s and 0s. In this space you have a variety of choices, from Make/CMake and Ant, to Maven/Gradle/Rake, to npm/yarn/webpack/gulp. Each tool or combination is suited for a certain type of programming language and environment. Most of the newer options borrow from one another and employ the same basic concepts. However, there is a newer build system growing in popularity over the past 3-4 years. Buck Build is an open source build system developed at Facebook which takes a unique approach to not just building your source code but also how it is managed overall. It’s refreshing to see innovation in build systems. There hasn’t been much disruption since Maven entered the scene in the early 2000s and began to overtake Ant. Buck is what some of the bigger companies use to manage huge projects. The interesting thing about Buck is that I believe it began life at Google (possibly as a series of hacks to the Gradle build??? …but I dunno, I’m speculating w/o research) and was copied by the original developers as they moved to other big companies. It has siblings: Pants (built by Twitter), Bazel (built by Google), and Blaze (the original project these were all based off of).

The most interesting thing about Buck is that it operates on the concept of the mono repo. This is a single repository that contains all the source to all projects/dependencies. This is the thing that jumped out at me as odd, disturbing, even outright disrespectful. You see, I have taken pride in evangelizing the idea of separation of concerns throughout most of my career. It seems to make sense and be applicable everywhere. Separate your model from your view from your controller logic, separate components/objects by interfaces, separate web services as micro-services, separate your white laundry from your color laundry, the separation of church and state, and of course, most importantly, separate your source into different repositories! Facebook and many others are now moving in the opposite direction and it’s taking a minute for this old-timer to catch up. (See how React combines UI HTML-like syntax with JS controller logic? It was another similar shock to me.)

The modern big company problem is managing all of the thousands of projects and dependencies in an efficient manner. It always comes down to loss of developer productivity. How much time do you spend building your software versus developing it? See, in my mind it makes sense to have a separate repo for each component, with established interfaces between each one and an artifact repository where dependencies are pulled from. That way you only run your compiler over the things that actually need to be compiled. If project A hasn’t changed and depends on component B, which is changing frequently, why do I need the source and compilers for project A? Why waste machine cycles spinning my compiler over all of the source? Apparently Buck addresses this in a way that is both unique and incredibly fast.
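From what I’ve read so far, Buck’s answer is to carve the mono repo into lots of small, explicitly declared build targets and aggressively cache their outputs, so it only rebuilds targets whose inputs actually changed. A BUCK file sketch might look something like this (the target and path names are made up for illustration):

java_library(
    name = 'component-b',
    srcs = glob(['src/com/facebox/b/**/*.java']),
)

java_library(
    name = 'project-a',
    srcs = glob(['src/com/facebox/a/**/*.java']),
    # project-a only rebuilds when its own sources or component-b's outputs change
    deps = [':component-b'],
)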

As much as I hate the idea behind a mono repo, and as sour a taste as this project initially left in my mouth, I am super excited to try it out. I suppose this was the only real new thing I wanted to babble about today. (Well, that and Flutter. I definitely wanna try Flutter too!) I haven’t been this excited about an open source tool since I discovered Groovy many years ago. We’ll see where this ends up.

Voice to text to alternate text to voice


I’m playing with old-school tech… the kind that got me so heavily entrenched in my career originally. I call it old school because it’s been around for ages now but it’s still new to me. Every time I use it I get a twinge of excitement. Anyone who knows me knows the thing I’m talking about… It’s voice technology. Hi, I’m Cliff. You’re here because you’re probably just as excited as I am over voice tech.

Today I hit a milestone in this hack-a-thon project I’m collaborating on. The milestone is reminding me:

  • I’m old.
  • I still got it!
  • I’m old!

It’s such a simple and easily achievable milestone but it’s taken me 2-3 days to get here. (To be fair, I’d been working after hours just before bedtime, devoting about 1 hr/day.) Still, I’m starting to feel that rush I felt way back when I had my 1st iPhone talking/listening to me. Back in the day I was known for throwing a solution together in 2hrs, full stack including client and server. Today I’m head over heels just to have a “Hello my translated world” page working! This is indeed my passion project!

Over the years I’ve tinkered here and there with voice to text, text to voice, text to text, and voice to voice implementations. It’s pretty cool what you can do when you sink enough effort into the stuff. For example, my first working solution involving a conversational app on iOS had me learning about EBNF syntax (which is sort of compiler tech). My other tinkering involved everything from telling my “homies” in Stockholm “What’s happening!” in an EN -> SV demo for BDD development, to giving my Cozmo bot some ears and a personality.

Today, I managed to drill into Azure Cognitive Services and do a rough English to Korean translation in a NodeJS app. Tomorrow I plan on integrating this into Bixby for this Hackathon thing that I’ll probably lose. I’m not so interested in winning the hackathon as I am in using and sharing the technology. I’m going to try to work with the Samsung dev relations team to get a working example posted online for all of you to see.
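In the meantime, the heart of what I got working boils down to one REST call. Here’s a rough sketch, assuming the v3 Translator Text endpoint (the env var name is made up, and your endpoint/region setup may differ):

const fetch = require('node-fetch');

async function translateToKorean(text) {
    const res = await fetch(
        'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=ko',
        {
            method: 'POST',
            headers: {
                'Ocp-Apim-Subscription-Key': process.env.TRANSLATOR_KEY, // hypothetical env var
                'Content-Type': 'application/json'
            },
            // the Translator API takes an array of { Text } objects
            body: JSON.stringify([{ Text: text }])
        }
    );
    const data = await res.json();
    return data[0].translations[0].text; // the Korean translation
}

translateToKorean("What's happening!").then(console.log);

For now, keep your radio… err.. station… umm… browser??? Yeah, keep your browser locked to this umm… station/channel/address/whatever! I’ll be back with more updates!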

IWOMM (It Works On My Machine)!


Say you have a test suite for your project. Go ahead, say it… right now out loud so a random passerby can hear you and begin rumors about the person muttering nonsense to themselves. Now say the test suite is all nice and well factored giving you 80+% coverage on your project. Say you’re proud of the test suite and then say you add a new test case to it that works beautifully until it makes its way over to either the build server or a colleague’s workstation. Hi, I’m Cliff. You’re probably here because you have a broken test case. I’m here because I’ve been there and done that.

I was recently in a discussion with a coworker when this topic came up and I thought I’d post a little something about it. The frustration most people feel from that random failure in a test that always works so well on my machine is that it seems incredibly hard to track down the problem. If you know a couple of basic principles, however, it can make finding and fixing these failures as easy as… as easy as the Kool-Aid man bustin’ through your living room wall. “Oh yeah!” Technically, forcing a glass pitcher full of red liquid through plaster (or concrete like in the 80s) without spilling a drop is not easy but I’ll leave that as a physics exercise for the more adept among us.

“But it works on my machine!!!”

…they always say in disgust as the green bar flips to a shade of crimson indicating they are not as careful as they claimed to be during the pull request. It’s okay though. It happens when you make a change like adding a new test or adding logic to an existing test which creates a random failure in a test case that has absolutely nothing to do with your changes. The problem many times is not the fault of the additional test or test mods that seem to trigger the failure. The cause is almost always one of two things, which I’ll elaborate on now.

Check your environment!
Many test cases have a tendency to rely on a particular environment. Maybe it’s an environment variable, a version of the runtime (NodeJS, Java, Ruby, etc.), or even a compiler optimization. If the environment where the failure occurs differs even slightly from yours then it’s highly possible that the slight variation is triggering the failure. I recall a Java test which ran a version of Tomcat failing because of the ImageIO library which was (or was not) present on the build server. The JVM was 1 point revision ahead of (or behind, I can’t recall exactly) my local version. There were times that I discovered the presence of an executable in the system’s $PATH environment variable would cause it to be picked up during a test and dramatically change the behavior. This can happen when you run on OS X vs. Linux.

Even something as subtle as a configuration file difference can have an effect. Some applications make use of local machine specific configuration files which are used to locate things like a database, an XML parser, or a virtual machine. The database itself is considered an environmental resource. Though you should never have unit tests which depend on a database, many people make a habit of it. Anything coming from a local or external database should always be considered first when trying to isolate a failure.

Order of Operations
These are my favorite problems because they’re dirt simple to fix! In a perfect world, your test cases should be able to run regardless of which order they execute in. However, in practice, this is often not the case. Check the actual order your test cases are run in. If the failing test passes when run on its own or when run in a different order, then the failing test itself may not even be the problem. The problem is in a test that is running before the failing test. Consider the following:

Your machine
A, B, C, D, E, F -> ALL PASS

OTHER machine
F, E, C, D, B, A -> D FAILS

The problem in this scenario is immediately obvious to me and would take me all of 5 minutes of dev time to fix. Can you identify what to fix?

Either test case E or F is causing a failure in D, because D runs successfully when they are not run first. I would run F before D, then E before D, to see which of these (possibly both) causes the failure. Once I’ve identified the offending test case I would simply add a tearDown step which reverts any state established during either the setup or the test case. Usually this is a very easy pattern of matching any file, database, or socket open call with a corresponding close in the teardown. It’s followed by nulling out any global variables and resetting all mock objects. On very rare occasions I have to tilt my head and squint a little harder to find the problem.

Take the above example. Let’s say test D passes when it runs after E. Immediately I know that test F is the offender. I would run F, D to confirm. I would then open up F and look for any global variable use. In Java this would mean anything that is static, either in the test case, the tested source, or any test helper files. I would make sure these global variables were set to null in F’s tearDown method. (If F doesn’t have a tearDown method I would add it.) I would then look to see if there were any mock objects used in F. Failure to reset and even null out mock objects can trigger problems because sometimes mock frameworks make use of globals under the covers.
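The same recipe translates to any test framework. Here’s a minimal sketch in JavaScript with Jest (the cache variable stands in for whatever global state leaks between your tests):

// pretend this is shared state that the tests (or the tested code) mutate
let cache = {};

afterEach(() => {          // Jest's equivalent of a tearDown method
    cache = {};            // revert any state established during setup or the test
    jest.resetAllMocks();  // reset every mock so stubbed behavior can't leak into the next test
});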

Lastly, I would look for any external resources used such as files, databases, and/or network connections. You should never use these in a unit test case but there are times when you feel the need to include them. I would reset the state of any external resources in tearDown then re-run the offending tests in the problematic order. Usually I don’t need to look any further and all my problems go away, but on occasion…

Concurrency
Test cases which make use of concurrency are the most difficult to fix. Usually I use a recipe of unwinding the concurrency and making the test synchronous. This is an exercise I’ll have to explain in another post. In short, it means decoupling the concurrent piece of your logic and testing it separately. This part is usually in the form of a callback, but again, I’ll have to cover that separately. (It’s a little more involved.) In fact, this is the only true way to fix concurrency triggered failures. Every other approach is merely a hack. Many people do tricky things like adding sleeps/delays and timers. Some people go as far as to move the asserts out of the concurrent block of code! This is extremely inappropriate because you actually change the nature of what you are testing. If you can’t take the time to unwind the concurrent piece you’re probably better off removing the test case entirely. It’s not entirely right but a missing test case is always better than a faulty test case.
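To give you a taste of what “unwinding the concurrency” means before that future post, here’s a tiny hypothetical JavaScript example. Instead of asserting inside the callback, you pull the callback’s logic into a plain function and test it synchronously:

// the callback's logic, unwound into a plain function any test can call
function increment(count) {
    return count + 1;
}

// production code wires the plain function into the concurrent machinery...
let count = 0;
setTimeout(() => { count = increment(count); }, 1000);

// ...while the test (Jest again) exercises the logic synchronously, no sleeps or timers
test('increment adds one', () => {
    expect(increment(41)).toBe(42);
});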

That’s all I have for now. I hope this long rant makes sense to somebody and/or saves time debugging. In summary you should follow the principles of testing code at all times.

  • Never let your tests depend on an external resource like a file, network connection, or database.
  • Do not use global variables in your test code or your production code.
  • Decouple your concurrent logic from the place it’s being called from. (This actually means refrain from using inline callbacks and/or anonymous inner classes or lambda expressions.)

If you follow these rules, and also make sure you regularly run your tests in random order and code a decent tearDown method in each test, then you should rarely, if ever, experience an IWOMM error.

God, Code, and what’s that gotta do with me?


What do you call it when something unique and surprising happens to you? A coincidence? What about when something else even more unique and surprising occurs? What happens when a bunch of events, each one more unique and unexpected happen in series? Hi, I’m Cliff. This is my story which involves a unique chain of events which happened to me. It resulted in my having consistent access to people and organizations that many do not have while landing me on several journeys where I experienced different people and cultures. It is a story that can best be titled, “I ain’t ‘posed to be here!”

First, some background is in order. I don’t have a college degree. I am not the most handsome person in the room. I am not smart, wise, or even witty. I do work out on occasion but I am by no means strong. (I struggle, like Damon Wayans said, with 90lbs on the bar.) I am that kid from school that should have been picked on way more than I was but somehow always got over.

I want to share my story to clarify why I don’t take credit for anything that has happened to/around me. This is solely my account of what I observed in 40+ years of sucking in oxygen and expelling CO2. I recently posted about my 20 year anniversary in my professional coding career. I explained how my wife and I came up from next to nothing. What I didn’t explain was how blessed I’ve been ever since.

In my years since becoming professional I’ve managed to have some incredible experiences. I started at a company called Seagull Lighting in Delran, NJ. There I worked on an AS/400 system and built out a client/server parts & inventory GUI using VB6 and ODBC to run stored RPG procedures and pull records back. I was then fortunate enough to get my first relocation package and move my family out of the tight 1 bedroom apartment to a slightly less tight 2 bedroom apartment as I worked at R.R. Donnelley & Sons. There I was part of a major $10 million project which resulted in a new Donnelley facility being erected right off Rte. 30 in Lancaster, PA. It was my experience with VB and stored procedures which created this business opportunity. Also, it was my earlier streak of coincidental fortune that allowed me to have this unique experience. I won’t elaborate, but the short story was that the deal almost didn’t happen; it came down to the wire when I had a MAJOR breakthrough substituting the OLE/DB driver with ODBC, literally saving the day, the deal, and probably several folks’ jobs. $10 million dropped into the business which came directly out of Jesus’ bank account.

Fast forward a few years later where I worked at a small startup. I didn’t do anything as phenomenal as landing another $10 million deal but I did work on some cutting edge (at the time) tech. Everything from Java RMI, to Linux, to EJB/Servlets. I wrote a lexer/parser which parsed and rewrote Java source files, injecting a special String attribute into each and every class, including inner and anonymous classes. It was part of a Java Web Start-like tech which had a backward scaling feature Web Start lacked. I can go into way more detail but let’s just say I got into some really crazy JVM based shenanigans at this company. The funny thing was that there was never any pressure. I was mostly free to play and explore.

Years later I was blessed to work for MapQuest where I got into mobile. It was there, around 2009, when I wrote an early Siri prototype using (ironically) the same voice that eventually became the default Siri voice. (This was the Samantha voice package from Nuance.) I did this work years before Siri, before there were even any voice capabilities on iOS. It was a passion project I worked on for over a year. (I quietly started the prototype on Blackberry before the iPhone was released.) By now I had established a pattern of playing with random tech and I kept being fortunate enough to have the time and resources to do so. I had the most elaborate setup of any developer in our office and I was sooo not senior! The Text To Speech part of my concept was eventually worked into the first ever free voice guidance app released on iPhone, MapQuest 4 Mobile. This was another gift from God’s bank account, not my doing at all but entirely His work. I also did a bunch of other random/crazy stuff there like naming and animating the MapQuest star logo trying to make a recognizable character out of him like the GEICO Gecko, trying (and failing) to start an ad campaign slogan, “There’s a Map For That”, and striking out on a hackathon because I was more obsessed with TDD than the app we built. LOL, good times!

I was eventually relocated (for REAL this time) to Silicon Valley when I got hired by Skype. I did some amazing work there as well but much of it went unrecognized. I built the first Video Mail prototype working with a team out of Russia. I brought my earlier Siri prototype to a hackathon that never happened, I eventually won another hackathon building a Skype voice assistant, I built an early prototype of a realtime language-to-language translation feature into Skype for Android, and some other stuff. This is when God started to show off as I travelled around the world frequenting places like Sweden, Estonia, Amsterdam, London, and Moscow. I was also part of a Skype promotional event where I got to meet and hang out with the cast from the Marvel movies at a red carpet event. I watched both the Captain America: The Winter Soldier and Guardians of the Galaxy premieres in the theater with the actual actors… in the same breathing space! I was invited to the after party. This was on repeated occasions.

Since moving to Cali I’ve been fortunate to work at many of the big name tech companies, Microsoft, Apple, GE, and Samsung. At each company something amazing happened.

Traveling the globe, hanging out with celebrities, and the like are activities reserved for the attractive, brilliant, talented, muscle-bound, or those with any combination of these traits. The opportunity to meet and shake hands with the actual super heroes I grew up idolizing sounds like something straight out of God’s playbook.

I am merely summarizing a small amount of the highlights that come to mind when I reflect. Each one of these moments was either initiated or spear-headed in prayer and Bible study. As a software developer I’ve learned to be pretty keen at observing patterns. The one pattern that has been 100% consistent is where there was prayer in my past there were also huge blessings. Truth is there are far more memories and events that I could include. Also, I am really not that good at computers… really! I’ve failed more tech interviews than I can count and have made a fool of myself with some of the most basic tasks professionally. There are details behind the above stories that sound straight out of a Hollywood blockbuster. There are also some dark moments that are equally dramatic. I’ve learned to be just as grateful for these moments for reasons I can’t explain without rambling on for several hundred pages.

When I look back there are only a few ways I can describe what happened and continues to happen to me over the years. I can say, “golly gee wiz! That was an amazing string of coincidences!” But that would just sound stupid because nobody begins their sentences with “golly gee wiz”. I COULD say, “I did it all by myself because I am this super awesome attractive tech guru who has a way with computers and whom nobody can say ‘no’ to!” That’s also silly because, as I’ve said, I’ve failed more interviews than I can count and also I was recently caught squelching exceptions with silent catch blocks *yikes* in professional Java code. (Plus I’m not attractive.) Instead I will just continue to say what I’ve been saying all along. I am blessed.

Happy Friday!


Another week down and I’m trying to motivate myself for some weekend developer activity. I’ve been out of it lately and taking a hiatus from #SaturdayCoding but this has to stop. Hi, I’m Cliff. You’re probably here because you write code. (It’s either that or you know me from high school or you’ve seen me at church. If you’re part of the church crowd don’t run off! You might pick up something useful!) On the drive home I was seriously thinking of picking back up on Android development. Then when I hit the front door I began considering my unfinished robotics projects. It’s so easy to get distracted, however I feel like I should re-acclimate with Android Studio as it’s been so long!

The last serious bit of Android code I wrote was close to a year ago when I was helping to build out an SDK. I’m not even sure what an Android app would look like these days! So much has changed. Should I continue to use Java? Should I tackle my next project in Kotlin? How about writing a game in Lua?? I do have some overdue work on another hybrid app I started a while ago. That means more JavaScript/HTML, which has all of a sudden become the only tech I use these days. (I never thought I’d be so heavily into browser tech but here I am!) The project also involves a NodeJS backend which I might try to tackle with Ruby/Rails instead… just for practice. All these languages feel so long in the tooth. I desperately need a change. I am so itching to do something with Lua embedded in a C++ thing with some OpenGL touches for good measure.

I know! Maybe I could dive into the Android NDK and stretch my C++/Lua wings there! Or maybe not… This hybrid project is calling me. It’s a promise I made to a friend that I shouldn’t let sit too much longer. Enough rambling for now. I really should fire up one of my IDEs. Which one do I choose???

[ ] IntelliJ
[ ] Android Studio
[ ] Xcode
[ ] VSCode

Something’s wrong with my Pumas


Maybe it’s ironic that I decided to wear my Puma sneaks to work today, on the very same day I get into trouble installing the puma gem. Maybe it’s ironic that I tweeted something whacky about my Pumas just yesterday. Maybe it’s also ironic that Alanis Morissette decided to collaborate with me on this blog post. (Okay, that last point isn’t totally true.) Hi, I’m Cliff. You’re here because you don’t wear Puma sneakers. I’m here because I spend so much money on my family that I can’t afford to upgrade my kicks.

Just the other day I felt like I had finally mastered RVM/Ruby/Rails. Today I hit another snag. Hold up though, this snag wasn’t as bad. You see, I think the Lord allows us to have trouble installing software like RVM in order to prepare us for the devil that comes out when you start installing other software like the puma gem. If I hadn’t paid particular attention and read my error messages I would be scratching/snatching another bald spot into my scalp. The error I was getting came in the middle of a bundle install when it hit the puma gem. The last part of the error was:

3 warnings and 4 errors generated.
make: *** [mini_ssl.o] Error 1

make failed, exit code 2

Gem files will remain installed in
/Users/c.craig/.rvm/gems/ruby-2.3.3@tutorial/gems/puma-3.4.0 for inspection.

I could have figured it out from this text alone but I looked further up in the error messages just to be sure.

/Users/c.craig/.rvm/gems/ruby-2.3.3@tutorial/gems/puma-3.4.0/ext/puma_http11
make "DESTDIR="
compiling http11_parser.c
ext/http11/http11_parser.rl:111:17: warning: comparison of integers of different
signs: 'long' and 'unsigned long' [-Wsign-compare]
  assert(pe - p == len - off && "pointers aren't same distance");
         ~~~~~~ ^  ~~~~~~~~~
/usr/include/assert.h:93:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __FILE__, __LINE__, #e)
: (void)0)
                        ^

This looked suspiciously like the error I got the other day. The problem seemed to be related to compile errors where the compiler was using the wrong version of the openssl headers. I tried the same command line flag from last time to see if the error would just disappear, but that didn’t work:

gem install puma -C --with-openssl-dir=$HOME/.rvm/usr

Then I thought I would just upgrade to a newer version of Ruby instead of the 2.3.3 version that first introduced the problem. After upgrading to Ruby 2.5 I still saw the error. After a bit of Googling I found that I just needed a slightly different variant of the command line flag to point the gem installer at the same openssl headers I had installed for Ruby 2.3.3. The command to install the puma gem turned out to be:
gem install puma -- --with-opt-dir=/path/to/custom/openssl

I ran that and my errors went away! All in a day’s work! The lessons learned here are:

  • Don’t panic when a given gem fails to install.
  • Read your error messages carefully even/especially when you don’t understand them.
  • Don’t wear Puma sneakers to work.

Of all the bullet points the last one is most important.

Can’t see nothing but the source code, I’m trippin’


We begin another year with more Ruby/RVM drama. First, a recap: in our latest episode, filmed last Friday, we discovered a problem with Ruby 2.3.3 and OpenSSL on OSX.

Me: rvm install 2.3.3

OSX: Nah, bruh! You are not slick enough, nor do you possess the lyrical skills to master this pristine version of Ruby. Get ya weight up!

Me: What the???!!!

Me: rvm reinstall 2.3.3p222

OSX: Srsly bruh? I said nah! Come back when ya game is tight! (Oh and your hat is plaid out!)

Me: How does this have anything to do with my game???!!!

Me: rm -fr ~/.rvm

Me: [repeats above steps with slightly different “game”]

Several iterations later we found the magic to satisfy some build-time macro that was getting in the way of the installation “game”. Hi, I’m Cliff. You’re here because you ain’t got any programming “game”. I’m here because I actually thought my baseball cap look was stylish when it really ain’t. In either case we now have a working Ruby installation but no way to properly install and run bundler. For those who don’t know, bundler is a Ruby gem that works similarly to npm in NodeJS. It manages dependencies.

I’ve spent the better part of my morning arguing with my command line in failed attempts to bundle install anything. What I get are various errors indicating a missing executable in some user directory of some sort. It’s not immediately clear:

You're whack and I cannot load such a file... $HOME/.rvm/rubies/ruby-2.3.3/lib/ruby/gems/2.3.0/gems/bundler-1.16.6/exe/bundler (LoadError)

I’m paraphrasing the error slightly, but you get my point. This comes after a clean RVM install of Ruby 2.3.3, which I just learned is “a ruby that requires 2 patches just to be compiled on an up to date linux system.” (I was so generously warned about this post-install.) It also comes after a successful gem install of bundler with that same Ruby. So I was thinking it’s as if my environment is set to use some default or global version of bundler. The confusion is that bundler is not installed with the 2.3.3 Ruby I pulled using RVM, so I’m not quite sure why it’s looking in that path for bundler. I don’t have enough Ruby experience to know the difference between user installed, system installed, default, and/or global gems, so I’m stumped.

Believe it or not, I got all the way to the above sentence where I said, “I’m stumped” before the obviousness of my problem jumped out at me. It really jumped off of my LCD display and slapped me across the face like, “C’mon homie! Don’t act like you can’t see what’s wrong here!” I was issuing all of my bundle commands with a trailing r! I know I’m not the only person who confuses “bundler install” and “bundle install”.

Installing software that installs software to write software


I should be home right now but I can’t leave the office until I document what felt like an extreme waste of time. Hi, I’m Cliff. I write software. Sometimes I write software that installs software. Sometimes the software installed by software I originally wrote is intended to install other software which I’m not clever or available enough to write. Sometimes the software installed by the software installed by the software I originally wrote… are you following me? It’s cool if you’re not.

Today’s pain came from RVM and Ruby version 2.3.3 on OSX. I hit a bug that I am almost certain I ran into before, but I don’t have any evidence of it on this blog. Today I am generating the evidence. GitHub user TheZoq2 solves the mystery in this thread.

Error running ‘__rvm_make -j8’

In short, this particular version of Ruby depends on openssl headers that don’t match the openssl installed by the Xcode developer tools. The solution is to install RVM’s openssl package, then reinstall Ruby 2.3.3 while pointing the C compiler at its headers.

$ rvm pkg install openssl
$ rvm remove 2.3.3
$ rvm install 2.3.3 -C --with-openssl-dir=$HOME/.rvm/usr

Short story lengthened… you wouldn’t know that from the above error. The only way you can figure this out is by reading the entirety of the error output. Somewhere after the rvm install fails it tells you to look inside a log file for details. Somewhere inside this log file you’ll see evidence of the make build system choking on macro arguments and the like.

So, in summary… if you wanna suck at programming, install Ruby version 2.3.3 via rvm. You won’t burn down your development machine doing so but you also won’t have any hair left on your head, eyebrows, or eyelids after several attempts to make sense of what “-j8” means.