Service Workers With Create-React-App


These days Service Workers are the latest craze in web apps, specifically in the area of Progressive Web Apps (PWAs). I’m futzing with a React app trying to get my own custom Service Worker logic to run and I hit a snag. Hi, I’m Cliff. You’re probably here because you write web apps. (I still think it sounds funny to say “write web apps”. I think of physically writing with a pen and a pad when I say “write” and nobody writes any programming logic with anything other than a source editor… unless you’re extremely hard-core! In which case, I think there should be a contest for the most advanced computer program actually written with traditional tools, like a stone and chisel. Am I still typing inside parentheses?)

Yesterday I realized that the popular automation tool, “Create React App” or CRA for short, was never intended to be used with anything other than some caching service worker logic. If you try to do anything besides offline support you’re probably going to need to “eject”. If you’re new to React, “eject” is the last thing you want to do. To “eject” means to own all of the configuration that CRA manages for you, which means you are now in “I know what I’m doing, don’t tell me nothing!” mode. In short, it’s not a good mode to be in, because by the time you get that far you are basically writing your own Create React App tool.

At any rate, my task was to figure out if/how we could use a service worker to enable some sort of push notification support in our app. I’m still working through the details so I should probably wrap this up and get back to it. Before I close I did wanna share a couple of nuances.

First, you won’t be able to easily add your own service worker logic to a React app, for a few painful reasons. The registration script, which looks so innocently easy to change on the surface, actually locks you into using ONLY the service worker which is generated by the CRA tool. It looks to see if your browser has support for service workers, then it tries to fetch the service worker by name. You can change the name of the file it attempts to fetch, but then there’s the wrinkle of how to get a custom JavaScript file individually pushed to the output folder without being webpack-ed in with the rest of your React files. I hacked one directly into my build output folder, which sorta works, but then… THEN… there’s this OTHER wrinkle!

The JavaScript file, when fetched, has to be served with a Content-Type HTTP header that includes “javascript” somewhere in it. Here’s where it’s ugly, because there is a noop-serviceworker-middleware plugin included with the dev server and support stuff in webpack which always serves the file with a Content-Type of “text/plain”. I dug deep in the source and found that it will ONLY use JavaScript in the Content-Type if the file name is EXACTLY equal to “service-worker” and this value is a HARD-CODED string! Now I could remove the check from the registerServiceWorker script but it just feels ugly at that point.
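
Just to make that first nuance concrete, here’s roughly what a hand-rolled registration looks like if you bypass CRA’s generated registration module entirely (a minimal sketch; the file name custom-sw.js is made up, and you still have to solve the copy-to-build-output and Content-Type problems described above before the browser will accept it):

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function () {
    // "custom-sw.js" is a hypothetical file name; it has to land in the build output
    // and be served with a JavaScript Content-Type for this registration to succeed.
    navigator.serviceWorker
      .register('/custom-sw.js')
      .then(function (registration) {
        console.log('Custom service worker registered with scope:', registration.scope);
      })
      .catch(function (error) {
        console.log('Custom service worker registration failed:', error);
      });
  });
}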

Time is going by and I should get back to the salt mine and figure out how to make all of this play nicely together. In the meantime, if you know of any Create React App hax or work-arounds that would make life simpler please speak up!

Hi Super Nintendo Chalmers, I’m Learnding!!!


It was one of those surreal moments: I was looking at some idiomatic JavaScript code illustrating the newer features of the language and I couldn’t follow the syntax. Then it clicked, I had a Ralph Wiggum moment. Hi, I’m Cliff. You’re here because I just learned how to spell cat. I’m here because I like the way the cat meows.

I’ll give you an example of some Redux code that tripped me up:

import React from "react";
import { connect } from "react-redux";
const mapStateToProps = state => {
  return { articles: state.articles };
};

const ConnectedList = ({ articles }) => (
  <ul className="list-group list-group-flush">
    {articles.map(el => (
      <li className="list-group-item" key={el.id}>
        {el.title}
      </li>
    ))}
  </ul>
);

const List = connect(mapStateToProps)(ConnectedList);
export default List;

My understanding was based purely on React. In React I could recognize ConnectedList as a functional component, or a function that returns the rendered part of a React component without needing an entire class definition. I could also make sense of the Redux magic that connects the state’s articles object to the articles property of the ConnectedList. What I couldn’t follow was the absence of any props parameter or references in the ConnectedList. I was also a little blurry on the initialization of the List object just prior to the export. It all looked a little foreign to me. Bear in mind, I’m still coming from a “Just Learned 1998-style JavaScript Last Night” background and the whole ES6/ES2015 stuff is still taking a minute to bake in. Then it just clicked… automatically! I instantly recognized everything that was happening as if I had been working in this environment my entire career! Let me try to decompose this block and explain how I understood it.

ECMA6, destructuring, arrow functions, etc.
There are many new features in ES6 which give you enhanced expressiveness in your code. These features are borrowed from or inspired by similar features in other languages. Having several years of experience in other languages which expose features like closures, implicit return values, and collection destructuring, I tend to pick up on patterns. For example, a piece of code that looks like this conveys a certain meaning or intention:

accounts.map(each => each.name + ', ' + each.phone)

It intends to change a collection of account objects into a collection of strings, each holding a name and phone number separated by a comma. It doesn’t fit my 1998 JavaScript way of thinking but my RxJava/Groovy/Kotlin/Swift/Python brain pulls pieces from its database and fills in gaps. These gaps are put in place by my early ES6 understanding. You see, I know ES6 adds a bunch of modern features and semantics but I’m not sure which ones. I know these other technologies include features like map/reduce. They also include other niceties such as being able to define a closure in place, optionally omit the enclosing curly braces, and optionally omit the return keyword. If I look at the outer piece as a vanilla JavaScript function call that takes some newer JavaScript syntax, I get this:

accounts.map(/*magical new JavaScript stuff*/)

That much I can comprehend. It says we are probably using some newer map/reduce coolness to iterate a list of accounts. Nothing too intimidating, right? Now let’s take the inner guts, or magical JavaScript parameter syntax, and try to make sense of it.

each => each.name + ', ' + each.phone

This looks a lot like a closure or anonymous function. It’s weird because there’s nothing around the naked initial each parameter, but that’s okay because many other languages make defining the function parameter with official types and parentheses optional, so maybe it’s optional here too. Also, some languages make enclosing curly braces around the function body optional. It’s called syntax sugar, so I’m thinking the curly braces are optional in ES6 too. Lastly, other languages make it optional to specify the return keyword. Instead it is accepted in some languages to use the value of the last statement in a function as its return value. With that understanding in mind I can read the above as:

(each) => { return each.name + ', ' + each.phone; }

Which actually makes me understand the original single line statement as:

var closure = (each) => { return each.name + ', ' + each.phone; };
accounts.map(closure);

Now that you get an idea of how my multi-language brain operates, let’s get back to the first piece of React/Redux magic that threw me. We have this:

const mapStateToProps = state => {
  return { articles: state.articles };
};

Which we understand as a closure or an anonymous function which is assigned to a variable named mapStateToProps. This function takes a state parameter and returns an object with an articles property which is set to the value of the articles property of the given state object. In essence, it merely transforms a state object into an object with a single property which is the articles property from the state.
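
To see it in action, imagine feeding it a tiny state object (the shape below is made up purely for illustration):

// Purely illustrative state; the real shape comes from your Redux store.
const state = { articles: [{ id: 1, title: 'Hello' }, { id: 2, title: 'World' }] };

// mapStateToProps(state) evaluates to { articles: [...] }, and that object is
// what connect eventually hands to the wrapped component as its props.
console.log(mapStateToProps(state)); // { articles: [ { id: 1, ... }, { id: 2, ... } ] }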

Next we have this code, which really troubled me:

const List = connect(mapStateToProps)(ConnectedList);

The second set of parentheses triggered a thought. The Groovy programming language side of my brain jumped in here and said, incorrectly, “Ooh! I know what this is!!! You can pass a closure parameter to a function outside of the closing parenthesis!!!” Then this part of my brain smiled intently just prior to running off to grab a donut. This was totally incorrect. Instead what’s happening is we are calling a function named connect and passing our anonymous function value as the parameter to this function. The connect function is actually returning a function, which we call in place, passing the ConnectedList React component. So, in actuality, connect is transforming the React functional component into another functional component, and it is using the mapStateToProps function to do this.
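
If it helps, here is the same line broken into two steps (just a reading aid; the intermediate withArticles name is mine, not part of the example):

// connect(mapStateToProps) returns a function (a higher-order component)...
const withArticles = connect(mapStateToProps);
// ...and calling that function with ConnectedList returns a new, connected component.
const List = withArticles(ConnectedList);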

But wait! What about this piece of code?

const ConnectedList = ({ articles }) => (
  <ul className="list-group list-group-flush">
    {articles.map(el => (
      <li className="list-group-item" key={el.id}>
        {el.title}
      </li>
    ))}
  </ul>
);

It almost looks like how I remember React functional components would look, but there’s no props parameter. Instead it uses this weird syntax:

const ConnectedList = ({ articles }) => (
  /*JSX stuff here*/
);

Again, we’re missing enclosing curly braces and a return statement. We can mentally add those and come up with this:

const ConnectedList = ({ articles }) => { return (
  /*JSX stuff here*/
)};

Still there’s the weird parameter definition, “({ articles })”. My Python/Kotlin brain center kicked in and reminded me about object de-structuring. These languages allow you to decompose tuples or objects and literally declare and assign multiple variables in one shot. The pattern looks like this:

var {name, address, zip} = profile;

which is equivalent to:

var name = profile.name, address = profile.address, zip = profile.zip;

This is not the exact syntax from these other languages, just pseudo code to illustrate the pattern. What this code does is take the name, address, and zip properties of the profile object and assign them to the name, address, and zip variables respectively. That’s a brief explanation of de-structuring. So maybe what’s happening is the props parameter is being de-structured into an articles variable. With that understanding we can read the original code as:

const ConnectedList = (props) => {
  var { articles } = props;
  return (
    /*JSX stuff here*/
  );
};

This is just a little more syntax sugar that cleans up the noise of what would be a bunch of “props.” references in your code.

That’s a bunch to take in at once but it’s how I was able to make sense of some initial Redux examples without actually looking up the new features from ES6. Ultimately I referred to the new feature list to confirm my understanding but I was able to go a long way on my own.

(This message was brought to you in part by NBCC Community Church in Redwood City. Through constant prayer, Bible study, fellowship, service, and teachings from Pastor Hurmon Hamilton one can understand even the most challenging programming environments. Other sponsors advise you to eat your veggies and always get at least 7 hrs of sleep nightly.)

Learning React!


I have the rather interesting job of teaching folks at work how to use ReactJS, the latest new web tech. It is interesting because I am no expert. I feel like I just learned it yesterday! Hi, I’m Cliff. You’re here because you probably know React better than I do. I’m here in a feeble attempt to learn you how to use it! It’s actually not unusual for me to insert myself as the resident expert on a technology that I barely/rarely use. It’s exactly how I developed my 1st iOS app.

Short Story Lengthened…
Many moons ago I was nominated as the lead engineer for a leading edge iOS app months before the 1st release of the iOS SDK. Back then I was a Java programmer and only knew Objective-C as some random objective for C developers to strive for. I was introduced during an initial call to some product and executive guys as “the team’s resident iOS expert” and given the task to develop a flagship product. (I wrote a blog post about it a few weeks ago.)

At any rate, I’ve started a template project around some work I actually planned to do for my church. I’m sort of trying to kill 2 birds with 1 stone. (It’s also interesting that today’s sermon was on “casting stones gently”. I’m paraphrasing here and if my pastor ever read my blog then I’d probably be sent to detention for missing the point of the lesson. I digress.) I will be uploading my template project with some notes to GitHub in case any one of my 5 dedicated blog subscribers wants to follow along.

Oh, by the way, you probably noticed this site got another template switcharoo. I felt it was past time to update and clean things up a bit. I had a bunch of dead links in my side panel and a lot of craziness that no longer makes sense. I also thought the template looked better than what I had been using. I don’t know the woman in the header. She looks rather serious and came with the template. I can’t tell what editor is being used. It looks like either Sublime or VSCode. It also appears that she is a web developer, as the syntax resembles HTML through the blur. At any rate, I have not been checking or responding to comments over the years or maintaining the site. I want to change that. I want to get back to how I used to post regularly and bug out with you all. For those of you who leave comments only to see them appear a year later with a reply, I apologize! Please don’t dislike me! For those of you who are relatively new here, *Ahrm* I’ve never left a comment un-moderated and always reply within 1 business day or less! For everyone else, well, happy coding!

A CONSTant pain!


What’s worse than poking yourself in the eye with a fork? Learning to code in C++, that’s what! Hi, I’m Cliff. You’re probably here because you’re looking for info on how to write software programs that run on things like cell phones, TVs, robots, etc. I’m here to sell you an eye patch because of the eye-poking happiness I just endured these last few days. Seriously, I will sell you an eye patch for the low, low price of $4.99! (Why do marketers and commercials always repeat the word “low” when trying to strike a deal? It’s as if the extra occurrence will trigger a spontaneous shift in perception and cause you to think, “Geez, this must be really deeply discounted!”) I will sell you the patch because you will need it. You will need the patch because you will poke yourself in the eyeball. You will poke yourself in the eyeball because it’s slightly less painful than what you’re about to learn. You’re about to learn this because you want to prove me wrong about my eye poking theory. You will be wrong. Then you will be wearing an eye patch.

Say you’re writing a C++ class. (Yes, say it… out loud for people to hear you. When they give strange glares simply explain that you are reading my whacky blog.) Now say you want to follow best practices and mark your class members as const to make them immutable because you’re all about that functional “no-side-effects” goodness. Now say you did this sub-consciously because the functional side of your mind likes to spontaneously assume control whenever you code, even when you are learning something simple. (By the way, are you still saying all of these things out loud? You can stop now because I’d like you to make it through the text before the authorities are called.) So one side of your mind has done this seemingly cool thing while the other side wasn’t thinking or looking. Now you’re in the middle of one of the class’ methods and attempting to do something non-trivial.

Because this is C++ and you’re not as familiar, you try “the simplest thing that works”™. You’re having a bit of trouble because you want to now establish some sort of state in your class while the logic in the method rolls through a series of loops/steps to accomplish its task. You make a local variable of type std::string and everything seems to sorta compile. You now cut/paste the variable as a new member variable in the class header to hold the state. You are building a string using the variable as a buffer. All of a sudden the compiler complains with an error saying, “You are not dope enough to invoke the insert method that I was perfectly happy with moments ago. I hate you!” You try all different variations of the overloaded insert method with no success. Days roll by as you scratch a new bald spot into your scalp. (Bear in mind that normal people do not scratch their head that hard or that frequently but this error has you perplexed!) You eventually decide to read the error message text closer and realize that your compiler is actually complaining about a signature mismatch on the parameters! It is pointing to the first parameter referred to as “this” and crying because the “this” pointer is marked constant!

Looking in the header you doubly/triply confirm that your string variable is NOT marked “const” because that’s the only thing your brain can pick out of the error message. “This string is not const”, you scream at the display while re-reading the error text repeatedly. By this point office workers begin raising concerns. Co-workers refuse to invite you to lunch. Your 5 o’clock shadow reveals the amount of sleep you should have been getting nights prior. Then it hits you!

The first parameter of a member function in C++ is the object itself! The error is referring to the actual instance of the object owning the method, and somehow IT is marked const! How could this be??? At this point, (shortly after removing the fork from your eyeball) you see the identifier “const” prominently declared in the member function’s declaration! Because the member function is “const”, the “this” reference is const! Because “this” is const it cannot mutate any member variables! Mystery solved and you are now off to Lens Crafters to schedule an optometrist appointment!
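
If you want the whole trap in a handful of lines, here’s a stripped-down sketch of the situation (the class and names are made up; only the shape of the mistake matters):

#include <string>

class ReportBuilder {
    std::string buffer;                           // NOT marked const...
public:
    void append(const std::string& chunk) const { // ...but this trailing const makes "this" point to a const object
        buffer.insert(buffer.size(), chunk);      // error: can't mutate a member through a const "this"
    }
};

Drop the trailing const from the member function (or mark the member mutable if you truly mean for it to change inside const methods) and the compiler calms right down.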

I have one eye left but I am going to make it through this C++ course I’m taking even if it blinds me! (Yes, I’ve programmed in Braille before.) Did I mention how much I love writing in C++?

Sourcemaps missing in Chrome Dev tools


I’m not sure why debugging is so hard on Tizen TVs but this morning I had a mini break-through! Hi, I’m Cliff. You’re here because you want to debug the JavaScript running on your 2016+ model Samsung TV. I’m here because I think I just figured out how to do it.

Today’s tip is a quick one. It involves sourcemaps, webpack, and Samsung TVs. Here’s the deal. You’re writing an app for your TV. You’ve created a fancy webpack config to bundle/package it up. You click build and go, or run, or whatever, to see it pop up on the TV. Then you jump into the source code only to find out it’s all minified! Debugging JavaScript is hard enough, but debugging it when all of the code is on one line can be insurmountable. The trick is to use the “eval-source-map” devtool option in your webpack config.

When I initially posted this article I was misinformed, thinking you needed to use BOTH the devtool option AND the SourceMapDevToolPlugin. It turns out you only need to use any one of the “eval” variant values in the devtool option. (I had ANOTHER issue in my project where I was not cleaning my output between launches, so the TV was NOT seeing the latest changes each time I modified the devtool option, which led to more confusion.)

Here’s an example of my config which I just got working:

const webpack = require('webpack');
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
    devtool: "eval-source-map",
    plugins: [
        new CopyWebpackPlugin([
                '.tproject',
                'config.xml',
                {from: 'src/*', ignore: [ '*.js' ], flatten: true}
            ])
    ]
};

Carl, World’s Best Manager


What does it mean to be a manager? When you think of your direct superior at work, how do you see her/him? Do they smile often? Give you family time off from work? Is your manager cool because she gifted everyone on the team with brand new Apple wireless keyboard and mouse combos? What does it really mean to have a cool manager? What does it mean to be that cool manager? Hi, I’m Cliff. You’re here because you either hate or love your manager and you wanna find out how to be or acquire a cool manager. I’m sorta here because… I dunno… I think I’m being called to write about this guy I worked under.

I don’t know why I’ve been thinking more and more about one particular manager I worked under. Don’t get me wrong, I’ve worked with a bunch of super-hero supervisors, even and especially in my recent years! There are a few folks I would give my life for… I’m talking about the last couple of guys I worked under. But let’s not play favorites. For all the awesome managers and supervisors I’ve worked with, I’ve also had my fair share of tyrants. Today’s topic is not to compare apples but to highlight one particular individual who guided me through one of the biggest accomplishments in my professional career. I’ll try not to be too specific, but this guy knows who he is, what he did, and the accomplishments I’m referring to.

Over the years, I’ve worked on a ton of high profile apps and teams at the world’s top tech companies, but there was one project very early in my tech career where I had the rare opportunity to be the lead engineer. It was a first for me and of course it went to my head. That’s where Carl comes in. He was my tech manager during this time, and that is also part of what makes Carl so special.

Before I get to the details, let me explain a bit about Carl. He was a younger guy but extremely chill. The kind that jokes during meetings but has a really laid back demeanor where it never seems like anything bothers him. My first encounter with him as my manager was during a remote call between offices. My office was in PA and he was working out of Denver, Colorado. We had these huge Polycom video conference setups in the conference rooms with automated cameras that would rotate towards you when powered up. Each room had a remote that could control both the local camera and the camera in the office you dialed into. (I believe these were HD displays as well, at a time when HD was extremely rare.) I was commissioned to lead this major new product which was going to be a first on the world’s biggest tech platform.

I dialed into the Colorado office to speak with my new tech manager about it. The camera in Denver spins around and I see two feet… prominently displayed… from a chair… around about where a person’s head would normally extend! It was a nicely clad pair of feet decorated with some fly sneakers, but it was still feet… where a head was expected. This was Carl. Carl’s body was attached to the feet. Somewhere towards the bottom of the room where I was able to remotely point the camera was his smiling face, attached to the body that was screwed into the feet initially displayed. (He was lying on the ground with his feet perched in the chair all chill like.) Now most people know how big a clown I am but when I saw this I was like, “This dude here is made out of my kind of material!”

We spoke about the project and Carl reminded me that I was running point. He told me he would be participating as a developer and, in fact, working under me as a team member. It was brand new sexy technology and Carl was also an awesome developer. This was around the year 2008 and it was a big deal. Everybody and they momma wanted to touch this brand new upcoming tech. Now as awesome as it was to see those fly sneakers when I initially dialed Denver I was twice as blown away when my new manager (I had just been assigned to Carl’s team after a re-org but before starting the project) told me he was excited to take direction from me.

Now here’s a few points of observation I want to share about the moment.

  • First, the best way to hold a remote video meeting with your subordinates is through your sneakers.
  • Never let ’em see you smile… at least until you let ’em see your Nikes. (Or were they Adidas?)
  • Great leaders humble themselves and empower their team members.

That one conference call is forever burned into my mind! We could end the story right there as this is powerful enough on its own but there’s more. You see, the tech I was working with was unfamiliar. It used a programming language I’d never even heard of before and required specialized equipment I had just recently received. I was a very capable programmer but I was out of my element. Also, I had recently become obsessed with this new development methodology called Test Driven Design, and the majority of people I worked with had not been bitten the same way. On paper, and in my mind, this would have been a storybook example of one man leads a team to a successful product launch using an industry leading approach and they all lived technically superior ever-after! However, reality was drastically different.

I had an ego problem at the time. There was very little you could tell me about programming that I didn’t think I already knew. I had a task and had learned an approach that would guarantee success. The worst thing you could do to a person like that would be to put them in a position of leadership. Carl was completely opposite. He was open to new ideas and approaches that were different from his own. He didn’t know where I was going with our new team but he was like, “I’m with it! Let’s do this!”

About a month into the project, I was in trouble. We were having weekly, probably daily, meetings on how to do TDD and I was giving tasks for people to work on. Hardly any source code had been developed for the project. My ego was too big to acknowledge any trouble and when asked about progress I would always respond with, “We’re on track!” Carl, a heavily experienced developer and manager, knew better and tried to guide me from behind, where my ego wouldn’t get bruised. He always spoke to me in private and never chastised me. Looking back, I deserved so much worse than chastisement! I was making so many mistakes, not so much with my design approach but with the practicality of everything. Add to that how I really didn’t understand the tools or the platform. Most of what I was trying to do was technically not possible but my ego convinced me otherwise.

Eventually another tech director was brought on the team, along with some folks who had actually developed with the technology for the platform. I was still officially considered the lead but mostly took a back seat to these more experienced folks. Carl was in my corner the entire time encouraging me as the project began to become real. We had an initial and successful launch and the entire team got their deserved credit.

At this point my confidence was bruised because my initial code and approach were tossed in order to have a successful launch. Deep inside, I realized the insanity of what I had tried to do and I was in failure mode. A few months later we began phase 2 of the project, which was 3 times as significant. It required twice the staff and a level of leadership/expertise of presidential magnitude. I had been reassigned from the project after another re-org, but then something strange happened only days after I was reassigned. I had a little technical breakthrough with some tech that no one else in the company had ever touched before. I was immediately reassigned back to the project and Carl was still taking lead from me!

During this next phase I was in a lead position but actually being led by Carl. He was experienced in managing big projects and I was experienced in making big messes. By this time, my ego was bruised enough that I was actively listening to him rather than barking toward him, although I wouldn’t admit it. You see, he had this way of reminding me to do things I’d forgotten that I didn’t know anything about. It would go something like this:

Him: “Remember we have to order servers.”
Me: “Of course! I’m already on it! Wait… how do I do that?”
Him: “Remember, you have to put in an ops ticket, tell them how many, what software needs to be on them, the business need, etc.?”
Me: “Oh yeah! Right, I was just about to do that! Wait how many should we purchase?”
Him: “You have to research how many requests/sec each server can support on average and forecast how many requests/sec we should expect… etc.”
Me: “Why do I gotta do that again?”
Him: “So you can predict how many we need to support a load! You are going to load test, right?”
Me: “Yeah, yeah… I was just starting that…”

The whole time this guy was extremely humble, supporting me the whole way even as I was clearly taking all the credit and acting like the man. To be fair, I was doing some major coding on the project. I was doing everything from client side media decompression and session management, network programming, to server side media compression, Java, C/C++, JNI, callbacks, timing, build systems, deployment, provisioning servers, and washing laundry. I was in my happy place, doing what I loved to do on a truly greenfield project. Still, the majority of the credit, leadership, management, and straight up awesomeness goes to a certain individual.

I’ve told this story several times to many different people but I never really identified Carl as the man behind the miracle. I never focused on his role as tech manager, leader, role model, etc. Over the years I’ve taken his example and tried to be a role model for people I work with whenever I am placed in a leadership role. I always make a point to give my coworkers all the credit regardless of how much code I contribute and irrespective of what role I play. I always highlight the contributions of others above my own. Ultimately I’ve learned leadership through serving those I lead or mentor. Shout out to Carl for teaching me this very valuable lesson and for being the first experience I’ve ever had with a manager who leads from behind.

React/Relay Auth0 and GraphQL lesson


I’ve been struggling through a React full stack course on Lynda.com all week. For those of you who are unfamiliar, React is a programming framework created by Facebook and “full stack” means programming everything from the user interface (those cool animated buttons on the screen) to the database, which typically lives in the cloud. How does a “database” live in a cloud you ask? What is a database??? Hi, I’m Cliff. You’re here because you have these questions and more. I’m here because… well, because my daughter just reminded me that I haven’t posted anything in a month of Sundays. It’s good to be back posting, even if only sporadically. I’ve learned and accomplished so very little since my last post so I guess I should probably make this a more regular thing. With that, let’s get started on what we’re s’posed to learn today.

Like I said above, I’m on this React Full Stack course that I found on Lynda.com. It assumes intermediate experience but in reality you would have to be a 20-year professional to finish the course. This is no fault of the instructor; rather, it is because of how quickly the popular technologies used in the course change. He introduces React and Relay using a few open source and freemium APIs and services which, by the time of my taking the course, had grown incompatible with one another. The instructor attempts to introduce React, Relay, and authentication and integrate these concepts in a rather interesting TicTacToe game that you play against either another player or an artificial intelligence. At the end of each game you are supposed to guess whether you played against a human or computer opponent.

The two major sources of pain are Auth0 and GraphQL with its publicly available Graph.cool service. Auth0 is a freemium authentication service whose sole purpose is to authenticate any user of your app. GraphQL appears to be a newer query language for object-like or NoSQL data stores which is supposed to be a glove fit for Relay. (Relay is an implementation of Facebook’s Flux design pattern, which itself is just a way of controlling data flow.) Just explaining the titles of these major components requires a bit of programming history and expertise! The problem is that Auth0, as explained in the course, does not work with GraphQL. Fixing the incompatibility took major experimentation and research as the newer way to integrate these two is completely undocumented. I found a few random hints through sporadic cries for help on various forums. Everything seems to point to Apollo authentication and I don’t even pretend to know what that is yet. I just want to finish this course!

Here’s what I discovered. Setting up Auth0 and the Auth0 Lock API out of the box will NOT work against an out of the box Graph.cool deployment. The problem is that Auth0 uses something called RS256 authentication keys and something else called OIDC. Graph.cool rejects these authentication tokens and the ONLY way to fix it is to disable an OIDC setting hidden in auth0.com’s advanced settings and to switch your signing algorithm to HS256 in these same advanced settings. Once you do that your app will break again, because you have to find and use yet another undocumented code setting on the Auth0 Lock API. When you instantiate your lock instance you have to set:

this.lock = new Auth0Lock(clientId, authDomain, {
    auth: {
        params: {
            scope: 'openid email'
        },
        responseType: "token id_token",
    },
});

The key piece here is that the responseType has to be set to “token id_token”, which is completely undocumented as far as I can tell.
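
For context, here is roughly how that token gets consumed once the lock is configured this way (a sketch based on Auth0 Lock’s “authenticated” event; how the token ultimately reaches Graph.cool depends on your Relay network layer setup):

// Sketch: when Auth0 Lock finishes, stash the id_token so it can later be sent
// to the GraphQL endpoint (for example, in an Authorization header).
this.lock.on('authenticated', (authResult) => {
    localStorage.setItem('id_token', authResult.idToken);
});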

The whole experience is a nightmare. Add to that the fact that the most confusing part of this exercise is not explained in enough detail in the course. The instructor merely talks through each line of code as he types, not really giving enough clarity on exactly how everything pieces together. There’s a lot happening under the covers and it’s all changing rapidly!

Also, many of the open source frameworks used throughout the course require version tweaking and massaging to work as described. For example, all of the react-relay imports should be changed to

import Relay from 'react-relay/classic'

because there have been major breaking changes in the Relay APIs which break the entire app. I’m not doing enough justice explaining how to fix the many problems in the course, nor have I properly addressed the many inconsistencies I tripped over. I don’t want to say the instructor did a bad job because he covers a TON of information and functionality in such a short course. Also the idea and implementation are really well put together. However, the course itself is a nightmare and I can’t see how anyone other than the most astute could complete it with all the recent API and service changes. I’m finally almost done with the course, having addressed the most difficult piece of integration! I’ll try to keep posting as I make progress!

You have to focus!


You’re watching TV. You see an onscreen button that looks shiny and clickable. You mash the buttons on your remote in an attempt to select the button and click it but nothing happens! You try various random button combinations with one particular button triggering a purchase from the app store and charging your credit card. You throw the remote down in disgust, leave the house and go shopping. Hi, I’m Cliff. You’re here because you’ve run up debt on your credit cards from frustration related to terrible TV user interfaces which sends you on shopping sprees where you purchase unnecessary paraphernalia. I’m here because I too have massive credit card debt. The word of the day is focus.

I’ve recently begun a new career developing apps for Samsung TVs and I’m having more fun than a fun loving person who is trapped in a funplex funded by functionally competent people who are colocated with funny people who are fundamentally incapable of having a bad day. In the middle of this fun, however, I ran into an issue while trying to develop my first TV app. In short, I couldn’t click an onscreen button.

My1stTVApp

I was following the Tizen tutorials/guides on creating your first TV web-app. The basic project template gives you an on-screen clock button that you’re supposed to click to see a running clock. This works out of the box for most people. You open the template, run it on the emulator (by the way, Tizen Studio has these cool TV emulators which remind me so much of Android!), then click the button using the mouse and voilà! A clock appears where the button was. The REAL problem comes when you run the project on an actual TV set. You find that there is no mouse on the screen and you only have the TV remote control!

Intuitively you will try to use the directional arrows on the remote, only to discover they control nothing! There is no on-screen pointer, no focus elements, and no way to change or set any focus.

This threw me into a documentation scouring frenzy. I read through tons of online docs, downloaded other samples, only to come up empty handed. (Why is it whenever I try a new framework I get hung up on the simplest functionality?) At any rate, after some head scratching, I learned that I could use the Chrome dev tools debugger, and from there I was able to interactively poke at the on screen elements using JavaScript. I located the clock button using the getElementById() function. I played around with calling focus, and I still didn’t see any on screen changes. (Usually when an item is in focus you would expect to see a little focus ring, but I saw nothing visually that indicated any sort of focus.) Not to be defeated, I tried clicking the OK button on the remote and magically the click event was triggered! The final solution is actually very elementary! (Source code below.)

var checkTime;
var clockButton;

//Initialize function
var init = function () {
    // TODO:: Do your initialization job
    console.log('init() called');
    clockButton = document.getElementById("divbutton1").getElementsByTagName("button")[0];
    
    document.addEventListener('visibilitychange', function() {
        if(document.hidden){
            // Something you want to do when hide or exit.
        } else {
            // Something you want to do when resume.
        }
    });
 
    // add eventListener for keydown
    document.addEventListener('keydown', function(e) {
        switch (e.keyCode) {
        case 37: // LEFT arrow
        case 38: // UP arrow
        case 39: // RIGHT arrow
        case 40: // DOWN arrow
            clockButton.focus();
            break;
        case 13: // OK button
            break;
        case 10009: // RETURN button
            tizen.application.getCurrentApplication().exit();
            break;
        default:
            console.log('Unhandled key. Key code : ' + e.keyCode);
            break;
        }
        console.log('Key code : ' + e.keyCode);
    });
};
// window.onload can work without 
window.onload = init;

function startTime() {
    var today = new Date();
    var h = today.getHours();
    var m = today.getMinutes();
    var s = today.getSeconds();
    m = checkTime(m);
    s = checkTime(s);
    document.getElementById('runningclock').innerHTML='Current time: ' + h + ':' + m + ':' + s;
    setTimeout(startTime, 10);
}

function checkTime(i) {
    if (i < 10) {
        i='0' + i;
    }
    return i;
}

Long story lengthened, I was able to overcome my first baby hurdle of using the TV remote with my web app by merely asking the button in question to become focused. I was then able to click the button and trigger the clock using the remote. I wish there was more excitement to my story than a missing focus ring but I'm just getting started! In the meantime I'll try to remain focused! (Pun is partially intended for cheesiness.)

Hello World The Unit Test Strikes Back


I was looking for something unique to post for my 11 year blog anniversary when it hit me. I’ve already missed the date! I started blogging way back on May 17th 2006 and every year since then I’ve wanted to post something special on the +1 year marker to celebrate. Every year I’ve forgotten or straight up missed the date. Last year would have been 10 years and I was so determined that I posted on August 9th so this year I’m following suit and pretending that August 9th is the actual date. Don’t say nothing, just nod your head and play along.

So what has 11 years of posting software development related articles and topics led to? I’ve shared tips on how to be terrible in this field. I’ve demonstrated how to test sockets-based apps on an Android emulator. There was a tutorial on how to code TDD style. I’ve also shared some of my many failures as a parent.

It’s still fun to play with stuff like networking and streaming audio and I want to do more posts on debugging tips that don’t suck. I honestly believe that being successful at software engineering is not just about the code you write, but about the code you wade through when resolving a problem. Since it is likely that you spend most of your time reading and modifying existing code and much less of your time cranking out new code you may want to brush up on pointers for clawing your way out of a rat hole.

I appreciate you checking out my site and if you have suggestions on what you want to learn more about drop a line in the comment section. I look forward to many more years of posting foolishness and discovering new ideas and programming patterns.

Networking, sockets, and UDP Streaming fun! Part III


I haven’t posted an update in a while, and when I looked back I realized that I had almost forgotten about this series I started. Hi, I’m Cliff. I like to post topics of interest, start a random series, and abandon it part way in. If you’ve been here before then you probably already knew that. Today we’re going to take a deep look at the UDP component of my streaming audio experiment. Actually, it never was my experiment; I borrowed it from some forum I found online, but that’s irrelevant. In my last post I explained how audio is captured on the server at a high level and covered the basics of how audio works. I also hinted at two key methods in the program, sendThruUDP() and receiveThruUDP(). These two methods send audio bytes over the network from the server and receive them on the client, respectively.

Rapid audio packets
Going back to my last post I highlighted the following block of code:

byte tempBuffer[] = new byte[1000] ;
int cnt = 0 ;
while( true )
{
   targetDataLine.read( tempBuffer , 0 , tempBuffer.length );
   sendThruUDP( tempBuffer ) ;
}

This is what we, in Silicon Valley, call a tight loop. It is a while statement based on a literal boolean value which will never be false, meaning the loop will continue indefinitely. The reason it is tight is that using a literal removes the need for an additional conditional check, which would slow down each iteration. I hinted at the importance of speed when I illustrated this loop. When streaming audio and/or video data in real time you want to do everything possible to reduce the overhead of sending the data over the network. With this in mind, let’s look inside sendThruUDP():

    public static void sendThruUDP( byte soundpacket[] )
    {
       try
       {
          DatagramSocket sock = new DatagramSocket();
          sock.send( new DatagramPacket( soundpacket, soundpacket.length, InetAddress.getByName( IP_TO_STREAM_TO ), PORT_TO_STREAM_TO ) );
          sock.close();
       }
       catch( Exception e )
       {
          e.printStackTrace();
          System.out.println( " Unable to send soundpacket using UDP " );
       }
    }

There’s a lot happening inside this method even though there are very few lines of code visible. Here we see code which starts by creating a DatagramSocket object. It then creates a DatagramPacket object and stuffs the packet object full of sound packet bytes using the constructor parameters. We also pass the length of the packet byte array along with the IP address and port of the client that we are streaming to. On the same line that we create the DatagramPacket we call send on the DatagramSocket instance, passing this newly created DatagramPacket object. The send() method will take this Packet, which contains the raw audio data, and send it to the IP address and port info that is recorded inside the DatagramPacket. We end the method by closing the datagram socket then continue with the loop that originally called it.

The work of object construction/destruction
Our first series of major problems is right here in this method. Remember what I said above about reducing overhead? Well there is a ton of overhead in this method, much of it in the form of constructors. An object constructor usually contains the most expensive parts of any program; as such, you want to call constructors as infrequently as possible. Java attempts to make programming fun and simple by hiding the many details of what your operating system and hardware are doing behind the scenes, but in reality it helps to have a cursory understanding of what happens in general. Start with DatagramSocket(). This isn’t just a magical object. (Let’s try to imagine what ultimately needs to take place for sound to fly from one machine to another.) In reality the object has to establish a communication bridge between your program and the operating system and eventually with your network card. This work would most likely happen in the constructor. Now consider the DatagramPacket object constructor. It doesn’t have to do as much work, however it does need to set aside (or allocate) a chunk of memory to hold the audio data. You may tend to ignore it, but allocating memory also takes some time. (As a Java programmer you are not supposed to think about memory allocation because it’s done auto-magically for you!) The operating system has to scan the available RAM and sometimes shuffle things a bit to find room for what you want to do. Finally, the call to sock.close() adds even more overhead. The close() call tears down all of the bridge work that was established in the constructor.

Visualization
To visualize what is happening, imagine you wanted to carry a bunch of wood from Home Depot to your condo. Pretend you needed a truck and that there was a bridge between Home Depot and where you lived across town. Let’s say the truck could only carry so many blocks of wood and required several trips back and forth between your home and Lowes. (Yes, I started the analogy with Home Depot but work with me, Lowes is easier to type.) The bridge represents the DatagramSocket, the truck would be the DatagramPacket, and the repeat trips would be the while loop calling the method. What this method does is build the bridge, then build the truck, before driving a single load of wood home. It then places dynamite under the bridge and under the truck, completely demolishing them, before exiting and returning to Home Depot for the next load. (The sock.close() method is represented by the sticks of dynamite in my analogy.) Hopefully you can imagine how inefficient it is to move all of the wood from Lowes to your apartment. If there were a crew of wood workers at your home they would be annoyed by how long it took for each load of wood to arrive, thus there would be a lag in their productivity. On each trip they would likely take a coffee break and watch an episode of Judge Judy.

We’ve found one major source of lag in our program but now let’s look at the client logic. Recall how I highlighted the run() method in the client?

public void run()
{
    byte b[] = null ;
    while( true )
    {
       b = receiveThruUDP() ; 
       toSpeaker( b ) ;
    }        
}

This is another tight loop which is intended to be fast. It calls the receiveThruUDP() method on each iteration to receive bytes from the network into a byte array variable, then passes them to the speaker. Inside the receiveThruUDP() method we have the following:

    public static byte[] receiveThruUDP()
    {
       try
       {
          DatagramSocket sock = new DatagramSocket( PORT_TO_STREAM_TO );
          byte soundpacket[] = new byte[1000];
          DatagramPacket datagram = new DatagramPacket( soundpacket, soundpacket.length, InetAddress.getByName( IP_TO_STREAM_TO ), PORT_TO_STREAM_TO );
          sock.receive( datagram );
          sock.close();
          return datagram.getData(); // soundpacket
       }
       catch( Exception e )
       {
          System.out.println( " Unable to receive soundpacket using UDP " );
          return null;
       }
    }

This method begins by creating a DatagramSocket. It then creates a byte array of 1000 bytes, and then goes on to create a DatagramPacket where it passes the empty sound packet byte array, its length, and the IP and port we are streaming to. Next it calls receive on the socket, passing the empty datagram packet. The receive method will fill the packet with the sound data that was sent from the server. Finally, the method ends by calling close on the socket and returning the received data to the calling code. Again, the logic in the method is intended to be fast. However, based on our learnings from above we can probably identify some very similar inefficiencies in this method. Creating a socket establishes a bridge with your operating system and your network card; creating the sound packet array and the DatagramPacket each requires memory to be allocated; closing the socket destroys the communication bridge that was set up at the beginning; then the entire process is repeated on each iteration.

How do we optimize away these inefficiencies? The simplest thing would be to remove all object construction from the inefficient methods. You also don’t want to call close() in either the send or receive method. Instead you want to create the objects when the program starts and reuse them inside the send and receive methods. There are likely other inefficiencies in the program but these are, by far, the most critical. Like I said earlier, I was amazed the program ran and produced any audio at all as the logic is terribly inefficient. The fun part of the project was working incrementally through both the client and the server while running the app and hearing the incremental improvements in real time. In other words, I was running both the client and the server and listening from the client. I would then make small optimizations on either the client or the server, recompile, and re-run. The compile and run step happens quickly, causing only a brief interruption in the audio. The result felt like I was massaging the sound into the correct shape… truly amazing!
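
To make the send side concrete, here’s a rough sketch of that reshaping with the construction hoisted out of the loop (the try/catch and class scaffolding from the original are omitted, and the receive side would get the same treatment):

// Rough sketch: create the socket and packet once, reuse them on every iteration,
// and never call close() inside the loop.
DatagramSocket sock = new DatagramSocket();
byte[] tempBuffer = new byte[1000];
DatagramPacket packet = new DatagramPacket(
      tempBuffer, tempBuffer.length,
      InetAddress.getByName( IP_TO_STREAM_TO ), PORT_TO_STREAM_TO );

while( true )
{
   targetDataLine.read( tempBuffer, 0, tempBuffer.length ); // refill the reused buffer
   sock.send( packet );                                     // same packet object, fresh bytes each pass
}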

I’m going to continue on this path of discovering the capabilities and possibilities of network streaming. Stay tuned for additional updates.