Git Reverting!

I used to think revert would take you back to the state of the repo when a given commit was made. It does not. Instead, it simply undoes the given commit. To do the former, you’re better off using reset instead.

git reset --hard HEAD~5

This will take you back to the state of the repo 5 commits ago. If you have uncommitted work you want left alone, use --soft instead of --hard. git revert, by contrast, undoes the commit whose hash was given, and makes a new commit with the changes that undo that given commit.

If it’s a merge commit then you have to use the -m option (for “mainline”) along with either a 1 or a 2: 1 means you want to take it back to the state of the first parent (the remote branch you merged into), 2 means the second parent (your local branch). (I usually use 1. I trust remote more than myself.)

If anything goes wrong when you try to revert, use git revert --abort (older versions of git may tell you to use git cherry-pick --abort) and try again.

Sometimes you may have to merge. If anything goes wrong with the merge you can use git merge --abort and start over.

So let’s go through this with a realistic (but hopefully not common) scenario. Let’s say some bad code went live. You’re on your site and notice something terribly wrong. Don’t fret! Let’s say you did a recent push and you think you know which commit has the problem, but don’t want to take the time to debug it while your live app is in a bad state. Here’s what you do (assuming you are up-to-date on your master branch).

$ git checkout -b revert-branch
$ git revert <bad commit hash>
$ git push origin revert-branch

Above we are checking out a new branch, undoing the bad commit, and pushing the branch up. You can find the hash of the bad commit on GitHub or with git log. git revert will drop you into your default text editor with a pre-filled commit message (Revert "<original commit message>"), so all you have to do is save and quit (:wq if your editor is vi or vim) and you’re ready to push up. Then merge the PR in, update the site, and you’re in the clear!

This has been a quick-n-dirty guide on reverting in git.


Getting started with tmuxinator

Tmuxinator is a way to easily manage tmux sessions. This guide will assume you have tmux and RubyGems installed.

Start by installing tmuxinator.

gem install tmuxinator

Tmuxinator will use your default editor, so make sure it’s set.

export EDITOR=vim

Find where the tmuxinator executable was saved.

find . -name tmuxinator 2>&1 | grep -v "Permission denied"

The above command finds any file or directory named “tmuxinator” and pipes the output to grep, which filters out any results containing “Permission denied”.

When you find the path to the tmuxinator command, add it to your $PATH. I keep my path in .bash_profile, but you can update your path with the following command.

export PATH=$PATH:path/to/tmuxinator

If you do have a ~/.bash_profile or ~/.bashrc file that you want to use then you can add the above line to it. You can also set an alias for tmuxinator there since it’s kind of long.

alias mux='tmuxinator'

Add the above line to your .bash_profile and run source ~/.bash_profile to refresh your environment. (This didn’t quite work for me the first time. You may have to restart your terminal.)
Now let’s make a new tmux session. (I know we just made an alias, but for clarity’s sake, I’m going to use the full command name.)

tmuxinator new foo

If you set your default editor above, it should open up a yml file in your default text editor. Have a look around the file. We’ll keep it the way it is for now. Save and quit when you are done checking it out. Now in the terminal you can run this session with the following command.

tmuxinator foo

It’s as simple as that! You can use bind s to switch between sessions. bind is the key combination that you will use to do basically all tmux commands. The default is ctrl-b, I believe. So ctrl-b s will let you switch between sessions. foo may be your only session unless you opened up a default session before you opened foo. If foo is your only session then this won’t be meaningful, but imagine you have a whole list of sessions. To be able to quickly choose another session you may want to use j and k instead of the arrow keys (vim style). To do that you may have to add the following line to ~/.tmux.conf.

set-window-option -g mode-keys vi

Restart tmux to make it take effect. Command-t exits tmux (I found that out by accident), but if that doesn’t work you may have to kill your tmux processes and then start tmux again. See my post about killing processes here.

Being able to use vi commands will also be helpful when you use bind [. This command will allow you to traverse output. Let’s say you are compiling your code or running a server and it errors, but the pane is too small to show all of it. bind [ will allow you to move up in the window to see the above output. If you have your mode-keys set to vi then you can use j, k, h, and l to move around the output.

You can kill your session with this command.

tmux kill-session -t foo

You can use command-line arguments in your yml session files.


name: foo
root: ~/<%= @args[0] %>


tmuxinator foo bar will open up a session and place you in the ~/bar directory.

There are a few handy things you may want to add to your ~/.tmux.conf:

bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

bind J resize-pane -D 5
bind K resize-pane -U 5
bind H resize-pane -L 5
bind L resize-pane -R 5

In tmux you can run commands that change your environment with bind :.
select-pane -D and resize-pane -D 5 (resize-pane can also just be resize-p) are the type of commands you can run in your window with bind :.
The lines above turn those commands into quick hotkeys: now, instead of typing bind :resize-pane -D 5 to resize your pane, you can simply press bind J. Likewise, bind h will take you to the pane to the left of the current pane (-L for left), and the remaining bindings work the same way.

This is only the tip of the iceberg with tmux. You can find out more about tmuxinator and see other config files with the links below.

Testing Private Functions

I’m a big fan of test driven development, but until now I’ve never had a good way of testing private functions. For the purposes of this article, private functions are functions that are not exported.


const id = x => x
const add = (x, y) => x + y

module.exports = { add }

The id function is private since it’s not exported. The add function is public since it is. If we were to test our module, it would look something like this.

const expect = require('chai').expect
const addService = require('./addService.js')

describe('add service', function () {
  it('should add', function (done) {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
    done()
  })
})
Notice the const func = addService.add line. It’s possible to grab the add function because it’s exported. If we tried the same thing with the id function, it would be undefined. We could just export the id function too, but that’s not good practice: you don’t want to expose more than you have to. Luckily we don’t have to, thanks to Rewire. Rewire works exactly like require except that it can also reach the private functions. By using rewire we can test all our functions. Our tests would look something like this.

const expect = require('chai').expect
const rewire = require('rewire')
const addService = rewire('./addService.js')

describe('add service', function () {
  it('should add', function (done) {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
    done()
  })

  it('should return identity with id function', function (done) {
    const func = addService.__get__('id')
    expect(func(42)).to.equal(42)
    done()
  })
})

Notice that instead of require('./addService.js') we have rewire('./addService.js'). We just substituted “rewire” for “require”. Rewire has a special getter function that allows it to get private variables. __get__ takes in a String, which is the name of the function or variable you want to get. So in our second test we can get the private function and test it like we would any exported function.

Please forgive the somewhat contrived example here, where the identity function doesn’t do anything and isn’t even used.

FYI, this was not covered above, but you will need mocha to run these tests.

npm install -g mocha
mocha nameoftestfile.js

This has been a quick-n-dirty guide to testing private functions from a module. You can see other solutions for this problem here. Shout out to barwin, whose answer inspired this post.

Killing a process

I was asked about killing a process in an interview and my answer was close, but not quite correct. I realized it wasn’t correct when I found myself actually having to do it just days later. Let’s say you have a node app running that you want to kill.

ps aux | grep "node"

ps aux will list all processes. We pipe it into grep to find our node process.

kill pid

The kill command kills the process. pid is a placeholder for the actual process id that we get from ps aux.
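As a throwaway, runnable example (using a background sleep as a stand-in for the node process):

```shell
# Start a background process, find it with ps/grep, then kill it by pid.
sleep 300 &
bgpid=$!
ps aux | grep "[s]leep 300"   # the [s] trick keeps grep from matching itself
kill "$bgpid"                 # sends SIGTERM to the process
```

Here we already know the pid from $!, but in real life you’d read it out of the ps output.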

This has been a quick-n-dirty guide to finding and killing a process.

UPDATE: I found an even easier way to kill processes. The pidof command will find the pid of a process. You may have to brew install pidof or apt-get install pidof if you don’t have it, but once you do then running the following command will show you the pid of your node app.

pidof node

This may give you multiple PIDs if multiple node apps are running. You can kill them all with one simple command.

kill $(pidof node)

UPDATE: Yet another very easy way to kill a process:

$ pgrep node
$ kill 93498

ES6 Destructuring

If you’ve ever dealt with functions that have a ton of arguments, then you will appreciate this. Consider the following function.

function foo (user, company, invoice, status, bar, stuff, things) {
  //do some stuff with all the things
}

According to Code Complete a function should never have more than 7 arguments. Here we have exactly 7, but it still seems like too many. Perhaps an args object would help.

function foo (argsObject) {
  //do some stuff with all the things
}

That might look great, but in practice it’s usually like this:

function foo (argsObject) {
  var user = argsObject.user;
  var company =;
  var invoice = argsObject.invoice;
  var status = argsObject.status;
  var bar =;
  var stuff = argsObject.stuff;
  var things = argsObject.things;

  //do some stuff with all the things
}

Sheesh. We might as well go back to having a billion arguments. This is one instance where destructuring can make our code a little cleaner. With ES6 destructuring we can do this:

function foo (argsObject) {
  var {user, company, invoice, status, bar, stuff, things}
    = argsObject

  //do some stuff with all the things
}
In the above code, your //do all the things section can look exactly like the first code snippet because you’ll be able to use all the variables with their normal names thanks to the destructuring.
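You can even go a step further and destructure right in the parameter list, so the argsObject variable disappears entirely. A minimal sketch (the function and property names here are just illustrative):

```javascript
// ES6 lets you destructure directly in the function signature.
function describeInvoice ({ user, company, invoice }) {
  return user + ' at ' + company + ' owes $' + invoice
}

console.log(describeInvoice({ user: 'sam', company: 'acme', invoice: 42 }))
// sam at acme owes $42
```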

You can test this concept out very easily.

var obj = { foo: 'bar' };
var { foo } = obj;
console.log(foo);


The output of this should be bar. If it’s not then you may be running an old version of JavaScript.

The same can be done the other way around. Instead of taking an object and getting vars out of it, we can take vars and get an object.

var foo = "foo";
function bar () { console.log("bar"); }

var obj = { foo, bar } //{ foo: "foo", bar: [Function: bar] }
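Two more tricks worth knowing are renaming and default values: a property can be pulled out under a new variable name, and a missing property can fall back to a default. A quick sketch (the names below are made up for illustration):

```javascript
var obj = { user: 'sam', invoice: 42 };

// user is renamed to userName; company is absent, so it defaults to 'none'
var { user: userName, invoice, company = 'none' } = obj;

console.log(userName); // sam
console.log(invoice);  // 42
console.log(company);  // none
```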

This has been a quick-n-dirty guide to destructuring with ES6. You can find out what else you can do with destructuring here

SSH Fingerprints

Sometimes you may need to get a fingerprint for authentication purposes. This is a quick-n-dirty guide on how to do that.

Run ssh-keygen -lf with the path to your key to get the fingerprint.

ssh-keygen -lf ~/.ssh/

The -l option stands for “list” and -f for “filename”.

However, you may get it back in a format you don’t recognize (SHA256, perhaps). If you want it to look something like this 00:11:22:33:44:55:6... then you’ll need to add another argument (-E md5).

ssh-keygen -E md5 -lf ~/.ssh/

That will give you something like md5 00:11:22:33:44:55:6... with maybe some stuff before and after. Chances are you’ll just need the numbers separated by colons, so just copy the part you need and paste it where you need it.

This has been a quick-n-dirty guide to retrieving ssh fingerprints.

SASS Mixins and Includes

Using @mixin and @include can make your stylesheets more DRY.

Here’s an example of a mixin.

@mixin large-text {
  font: {
    family: Arial;
    size: 20px;
    weight: bold;
  }
}

Here’s how it might be used.

.page-title {
  @include large-text;
  padding: 4px;
  margin-top: 10px;
}

The above compiles to this:

.page-title {
  font-family: Arial;
  font-size: 20px;
  font-weight: bold;
  padding: 4px;
  margin-top: 10px;
}

This has been a quick-n-dirty guide to mixins and includes in SCSS. For a more robust article on these concepts go to the source of the above code snippets here

Turbo-charge your Docker Workflow

My last blog post was about how to use docker in your everyday workflow. This post will show how to make it quick and easy. If you are making a lot of changes to your website, it’s a pain to have to build and run docker images every time you tweak some CSS. Here’s how I addressed that problem.

First let’s review. Given that we want our image to be named “devi” and our container to be named “devc”, we’d have to run all of these commands to test our changes with docker:

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi

And that gets old real quick when you are doing a bunch of tinkering on your site. So let’s take all these commands and put them in a bash script. Let’s name it dbr, for “docker build run”.

vim dbr

Add the above commands to the file. Save and quit and then make it executable.

chmod 744 dbr

Now add it to your path. Check your path with echo $PATH and move it to one of the locations included in your path. I moved mine to /usr/local/bin.

mv dbr /usr/local/bin

Once you’ve moved it to a directory that’s in your path, you can run the command from anywhere. But there’s a caveat: just because you can run the command from anywhere doesn’t mean it will actually succeed from anywhere. You’ll need to run it from the directory your Dockerfile is in for it to build the image correctly. So add a line at the beginning of the script to move to the correct directory, using an absolute path. After you do that, you can truly run the command from anywhere in your folder structure, and it will delete your old image, build a new one, and run a new Docker container with that image.
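Putting it all together, the finished dbr script might look something like this (with /path/to/your/project standing in as a placeholder for wherever your Dockerfile actually lives):

```shell
#!/bin/bash
# dbr -- rebuild the dev image and restart the dev container
cd /path/to/your/project || exit 1

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi
```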

This has been a quick-n-dirty guide on speeding up development. Docker is the context of this, but the concept of making shell scripts to save time can be applied to all kinds of tasks.

Adding Docker to your Workflow

If you are like me, you may be thinking “Docker’s great, but how exactly do I work it into my workflow?” This quick-n-dirty guide will answer that question. But as always, success is not guaranteed.

After you’ve fixed a bug or added a feature to your application, you’ll want to build a new image with the changes.

docker build -t repo:version2 .

The -t option allows you to name the image. “repo” would be the name of your repository or image and “version2” would be the name of your tag. The “.” is easy to miss, but it’s very important: it’s the path, the only argument required by docker’s build command. In this case, we are in the directory we want to build from, so we simply use the period.

Now we can run the docker image to test our changes. But first, don’t forget to stop your currently running docker container if you have one running on the same port. (You could also run the new one on a different port.) If you want to use the same port, run docker stop containerId, containerId being the actual id of the container. You can find the container’s id with docker ps; you only have to use the first 3 characters. Now you can run your new image.

docker run -p 8080:3000 -d repo:version2

This command assumes your app usually runs on port 3000, but here we are mapping it to port 8080. So on your local machine you can reach your app at localhost:8080 even though it’s still running on port 3000 inside the docker container.
The -d option is detached mode, so it doesn’t take over your terminal.

Now you can check out your changes and make sure everything is working the way it’s supposed to at http://localhost:8080. If your app isn’t behaving as expected and you want to investigate you can see the logs from your docker container with docker logs containerId with “containerId” being the actual id for the container, or at least the first 3 characters of it. This way you can see any errors that may have been logged.

After you test your changes then do your usual git commands.

git add .
git commit -m "did stuff"
git push origin workingBranch

Then you can push your image to your Docker registry.

docker push repo:version2

My next post will be how to make this very easy and quick.

Dockerizing a Node App

So you have a node app and you want to run it in a docker container. This is a very quick and dirty guide that will show you how. Success is not guaranteed, but hopefully it can give you a rough idea of what you need to do.


Prerequisites:

  1. Node app
  2. Docker installed

Given that you have a Node app and Docker installed, making your app run in a docker container is very simple. First you will need a Dockerfile. This file will give instructions on setting up the environment. cd into the directory of your node app and create the Dockerfile.

vim Dockerfile

Your Dockerfile should look something like this:

FROM node
COPY . /app
RUN npm install
ENTRYPOINT ["npm", "start"]

FROM node means your base container will be a node container. Basically a container that already has nodejs installed and configured.

The next two lines copy all of the contents of the current folder on your machine into an app folder and set the working directory to that app folder, where all the subsequent commands will be run.

RUN precedes any command you want executed during the build. So RUN npm install will install all the dependencies for your app.

The last line sets the entry point. This will be the command that starts the app, in this case, npm start.

Save the file and quit.

Now that we have the Dockerfile, we can build our image with

docker build -t our-sweet-image .

If you have a docker account it may be more like

docker build -t username/our-sweet-image .

The -t option is what allows us to name the image. You can run docker images to verify that the image was created.

Now that we have the image we can run it with

docker run username/our-sweet-image

But that will run directly in your terminal, and you’ll have to open a new terminal window if you want to keep using your command line, so let’s add the -d option so that it runs in detached mode.

docker run -d username/our-sweet-image

But chances are, your node app runs on a specific port. So if you need to set ports then run the image with this command

docker run -p 8080:3000 -d username/our-sweet-image

This assumes your node app is set up to run on port 3000, but now you can reach it at http://localhost:8080 (or So, to recap: 8080 is the port that serves your app locally, but the docker container still runs it on 3000. If you went into your container (which you can do with docker exec -it containerName bash; you can check your container name with docker ps, and you can also use the container id instead of the name) and ran curl http://localhost:3000, you would get the html of your node app.

There are still a couple things you may need for your app to run correctly. If your app uses SASS then you may need this RUN npm rebuild node-sass after your npm install command.
If your app runs gulp tasks and/or tests, you will need to add the command that runs those tests/tasks just before the entry point command, e.g. RUN gulp build && gulp tasks, or if you have that in an npm build script, you will just need to add RUN npm run build.

Hope this helps. My next blog will be about how to update images and push them to your docker account.