Intro to Bash Scripting

Knowing Bash is very useful because it’s available on virtually any Unix system, and when you are dealing with servers you end up using it a lot. Bash scripts can be as simple as a list of commands or grow complicated with lots of functions and logic. Anything you can do in a terminal you can do in a Bash script and vice versa. This will be a very basic intro: it will show how to declare and use variables, make and use functions, and write if-else statements. But we’re gonna start with the hello world Bash script.

Hello World

You can make a script and run it with one command if you want.

$ echo "echo hello world" > helloworld && sh helloworld
hello world

Above we are putting the output of echo "echo hello world" into a file called helloworld, and then (if that succeeds) running the sh program with our file as an argument. Now this is the laziest way to do it. We should at least give our file a name that makes it obvious it’s a Bash script. Better practice still is to include the interpreter at the top and make the script executable. Then we can run it by itself.

$ vim

#!/bin/bash
echo hello world

$ chmod +x
$ ./
hello world

Running a script with sh will work regardless of permissions. To run the script directly, you will need to make it executable. (I’ve had weird errors with one way and not the other, so if that happens just use whichever way works.)


Variables

Variable declarations are pretty simple. Type the name of the variable, then the equal sign, then the value. When you use the variable, you precede it with a dollar sign.

foo="bar"
echo $foo

The biggest mistake people make when declaring variables is putting spaces around the equal sign. There must be no spaces.
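A minimal sketch of the rule (the variable name is arbitrary):

```shell
greeting="hello"      # works: no spaces around the equal sign
# greeting = "hello"  # would fail: bash would look for a command named "greeting"
echo $greeting
```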


Functions

foo() {
  echo "bar"
}
The code above declares a function. Even though it looks like a function you’d see in other languages like JavaScript, it doesn’t behave that way. It has parentheses, but no parameters go there, and you don’t call a function with parentheses either. You call the function simply by typing its name. This is how it would work with arguments.

function foo () {
  echo $1
}

foo bar

$ sh
bar
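To extend this a bit: $1, $2, and so on hold the positional arguments, and $@ holds all of them. A small sketch (the function name greet is made up for illustration):

```shell
greet() {
  echo "$1 and $2"     # the first and second arguments
  echo "all args: $@"  # every argument
}

greet foo bar
```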

if-else Statements

if [[ 1 == 1 ]]; then
  echo "one equals one!"
else
  echo "somehow one does not equal one"
fi

An alternate way to write an if-else is:

if [[ 1 == 1 ]]
then
  echo "one equals one!"
else
  echo "somehow one does not equal one"
fi

I prefer the former.

You can use a single bracket for the if test instead of two, but double brackets are newer syntax and preferred. They have worked better in my experience.

Here are some other things to keep in mind when using if statements.

  1. Must have a space after [ and before ]
  2. It is recommended that you put double quotes around variable names in if tests when using only one bracket. That way, if the variable doesn’t exist, it will be treated as an empty string. If you don’t have the double quotes, you will get unexpected behavior if the variable is empty.
  3. Must use : as a placeholder for empty bodies
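A minimal sketch of rule 2, using a deliberately unset variable (maybe_empty is a made-up name):

```shell
unset maybe_empty
# with the quotes, this collapses safely to [ "" = "" ]
if [ "$maybe_empty" = "" ]; then
  echo "empty or unset"
fi
```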


Arithmetic

Any variable assignment that involves arithmetic must be preceded with let.
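A quick sketch of let in action (the variable names are arbitrary); without let, bash treats 2+3 as a literal string:

```shell
plain=2+3     # no let: just the string "2+3"
let sum=2+3   # with let: actual arithmetic
echo $plain
echo $sum
```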
That’s it for now. I’ll do another post on dates in Bash later.

Case Statement


case $var in
  foo)
    echo foo
    ;;
  bar | blah)
    echo "bar or blah"
    ;;
  *)
    echo dunno
    ;;
esac
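Here’s a runnable version with var set to a sample value so you can see which branch fires:

```shell
var="blah"

case $var in
  foo)
    echo foo
    ;;
  bar | blah)
    echo "bar or blah"
    ;;
  *)
    echo dunno
    ;;
esac
```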

Git Reverting!

I used to think revert would get you back to the state when a given commit was submitted. It does not. Instead it simply undoes the given commit. To do the former, you’re better off using reset instead.

git reset --hard HEAD~5

This will take you back to the state of the repo 5 commits ago. If you have uncommitted work you want left alone, use --soft instead of --hard. git revert undoes the commit whose hash was given, and makes a new commit with the changes that undo that given commit.

If it’s a merge commit, then you have to use the -m option (for mainline) along with either a 1 or a 2: 1 meaning you want to keep the state of the first parent (the branch you merged into), 2 meaning you want to keep the state of the second parent (the branch that was merged in). (I usually use 1. I trust the mainline more than myself.)
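If you want to see this end to end, here’s a throwaway demo you can paste into a terminal; the repo, file names, and messages are all made up for illustration. Reverting the merge with -m 1 makes the feature’s file disappear:

```shell
cd "$(mktemp -d)"                    # scratch directory so nothing real is touched
git init -q
git config "T"       # throwaway identity so commits work
git config ""
git commit -q --allow-empty -m "base"
git branch -M master                 # normalize the branch name
git checkout -q -b feature
echo "feature work" > file.txt
git add file.txt
git commit -q -m "add feature file"
git checkout -q master
git merge --no-ff -q -m "merge feature" feature
git revert -m 1 --no-edit HEAD > /dev/null
cat file.txt 2>/dev/null || echo "file gone"
```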

If anything goes wrong when you try to revert, use git revert --abort and try again.

Sometimes you may have to merge. If anything goes wrong with the merge you can use git merge --abort and start over.

So let’s go through this with a realistic (but hopefully not common) scenario. Let’s say some bad code went live. You’re on your site and notice something terribly wrong. Don’t fret! Let’s say you did a recent push and you think you know which commit has the problem, but don’t want to take the time to debug it while your live app is in a bad state. Here’s what you do (assuming you are up-to-date on your master branch).

$ git checkout -b revert-branch
$ git revert <bad commit hash>
$ git push origin revert-branch

Above we are checking out a new branch, undoing the bad commit, and pushing the branch up. You can find the hash of the bad commit on GitHub or with git log. git revert will drop you into your default text editor with a commit message beginning with “Revert”, so all you have to do is save and quit (:wq if your editor is vi or vim) and you’re ready to push up. Then merge the PR in, update the site, and you’re in the clear!

This has been a quick-n-dirty guide on reverting in git.

Getting started with tmuxinator

Tmuxinator is a way to easily manage tmux sessions. This guide will assume you have tmux and ruby gems installed.

Start by installing tmuxinator.

gem install tmuxinator

Tmuxinator will use your default editor, so make sure it’s set.

export EDITOR=vim

Find where the tmuxinator executable was saved.

find . -name tmuxinator 2>&1 | grep -v "Permission denied"

The above command finds any file or directory named “tmuxinator” and pipes the output to grep, which filters out any results containing “Permission denied”.

When you find the path to the tmuxinator command, add it to your $PATH. I keep my path in .bash_profile, but you can update it with the following command.

export PATH=$PATH:path/to/tmuxinator

If you have a ~/.bash_profile or ~/.bashrc file that you want to use, then you can add the above line to it. You can also set an alias for tmuxinator there, since it’s kind of long.

alias mux='tmuxinator'

Add the above line to your .bash_profile and run source ~/.bash_profile to refresh your environment. (This didn’t quite work for me the first time. You may have to restart your terminal.)

Now let’s make a new tmux session. (I know we just made an alias, but for clarity’s sake, I’m going to use the full command name.)

tmuxinator new foo

If you set your default editor above, it should open up a yml file in that editor. Have a look around the file. We’ll keep it the way it is for now. Save and quit when you are done checking it out. Now in the terminal you can start this session with the following command.
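For reference, a freshly generated project file looks roughly like this; the exact contents vary by tmuxinator version, and the window names and commands below are just illustrative:

```yaml
# ~/.tmuxinator/foo.yml
name: foo
root: ~/

windows:
  - editor: vim
  - server: # put the command that starts your server here
  - logs: tail -f log/development.log
```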

tmuxinator foo

It’s as simple as that! You can use bind s to switch between sessions. bind is the key combination that you will use to do basically all tmux commands. The default is ctrl-b, I believe. So ctrl-b s will let you switch between sessions. foo may be your only session unless you opened up a default session before you opened foo. If foo is your only session then this won’t be meaningful, but imagine you have a whole list of sessions. To be able to quickly choose another session you may want to use j and k instead of the arrow keys (vim style). To do that you may have to add the following line to ~/.tmux.conf.

set-window-option -g mode-keys vi

Restart tmux for it to take effect. Command-t exits tmux (I found that out by accident), but if that doesn’t work you may have to kill your tmux processes and then start tmux again. See my post about killing processes here.

Being able to use vi commands will also be helpful when you use bind [. This command will allow you to traverse output. Let’s say you are compiling your code or running a server and it errors, but the pane is too small to show all of it. bind [ will allow you to move up in the window to see the above output. If you have your mode-keys set to vi then you can use j, k, h, and l to move around the output.

You can kill your session with this command:

tmux kill-session -t foo

You can use command-line arguments in your yml session files.


name: foo
root: ~/<%= @args[0] %>


tmuxinator foo bar will open up a session and place you in the ~/bar directory.

There are a few handy things you may want to add to your ~/.tmux.conf

bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

bind J resize-pane -D 5
bind K resize-pane -U 5
bind H resize-pane -L 5
bind L resize-pane -R 5

In tmux you can run commands that change your environment with bind :.
select-pane -D and resize-pane -D 5 (resize-pane can also be shortened to resize-p) are the kind of commands you can type at that prompt.
The lines above turn those commands into quick hotkeys: instead of typing bind :resize-pane -D 5 to resize your pane, you can simply press bind J. Likewise, bind h will take you to the pane to the left of the current pane (-L for left), and the rest of the lines work the same way.

This is only the tip of the iceberg with tmux. You can find out more about tmuxinator and see other config files with the links below.

Testing Private Functions

I’m a big fan of test driven development, but until now I’ve never had a good way of testing private functions. For the purposes of this article, private functions are functions that are not exported.


const id = x => x
const add = (x, y) => x + y

module.exports = { add }

The id function is private since it’s not exported. The add function is public since it is exported. If we were to test our module, it would look something like this.

const expect = require('chai').expect
const addService = require('./addService.js')

describe('add service', function () {
  it('should add', function () {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
  })
})

Notice the const func = addService.add line. It’s possible to grab the add function because it’s exported. If we tried the same thing with the id function, it would be undefined. We could just export the id function too, but that’s not good practice: you don’t want to expose more than you have to. Luckily we don’t have to, thanks to Rewire. Rewire works exactly like require except that it can reach the private functions too. By using rewire we can test all our functions. Our tests would look something like this.

const expect = require('chai').expect
const rewire = require('rewire')
const addService = rewire('./addService.js')

describe('add service', function () {
  it('should add', function () {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
  })

  it('should return identity with id function', function () {
    const func = addService.__get__('id')
    expect(func('baz')).to.equal('baz')
  })
})

Notice that instead of require ('./addService.js') we have rewire ('./addService.js'). We just substituted “rewire” for “require”. Rewire has a special getter function that allows it to get private variables. __get__ takes in a String, which is the name of the function or variable you want to get. So in our second test we can grab the private function and test it like we would any exported function.

Please forgive the contrived example here, where we have an identity function that doesn’t do anything and isn’t even used.

FYI, this was not covered above, but you will need mocha to run these tests.

npm install -g mocha
mocha nameoftestfile.js

This has been a quick-n-dirty guide to testing private functions from a module. You can see other solutions for this problem here and shout out to barwin for his answer, in which this blog was inspired by.

Killing a process

I was asked about killing a process in an interview and my answer was close, but not quite correct. I realized it wasn’t correct when I found myself actually having to do it just days later. Let’s say you have a node app running that you want to kill.

ps aux | grep "node"

ps aux will list all processes. We pipe it into grep to find our node process.

kill pid

The kill command kills the process; pid is a placeholder for the actual process id that we get from ps aux.
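If you want to practice without hunting down a real app, you can spin up a throwaway process and kill that instead; $! holds the pid of the most recent background job:

```shell
sleep 60 &   # a throwaway background process to practice on
pid=$!       # $! is the pid of the last background job
kill "$pid" && echo "killed $pid"
```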

This has been a quick-n-dirty guide to finding and killing a process.

UPDATE: I found an even easier way to kill processes. The pidof command will find the pid of a process. You may have to brew install pidof or apt-get install pidof if you don’t have it, but once you do then running the following command will show you the pid of your node app.

pidof node

This may give you multiple pids if multiple node apps are running. You can kill them all with one simple command.

kill $(pidof node)

UPDATE: Yet another very easy way to kill a process

$ pgrep node
$ kill 93498

ES6 Destructuring

If you’ve ever dealt with functions that have a ton of arguments then you will appreciate this. Consider the following function

function foo (user, company, invoice, status, bar, stuff, things) {
  //do some stuff with all the things
}

According to Code Complete, a function should never have more than 7 arguments. Here we have exactly 7, but it still seems like too many. Perhaps an args object would help.

function foo (argsObject) {
  //do some stuff with all the things
}

That might look great, but in practice it’s usually like this:

function foo (argsObject) {
  var user = argsObject.user;
  var company =;
  var invoice = argsObject.invoice;
  var status = argsObject.status;
  var bar =;
  var stuff = argsObject.stuff;
  var things = argsObject.things;

  //do some stuff with all the things
}

Sheesh. We might as well go back to having a billion arguments. This is one instance where destructuring can make our code a little cleaner. With ES6 destructuring we can do this:

function foo (argsObject) {
  var {user, company, invoice, status, bar, stuff, things}
    = argsObject

  //do some stuff with all the things
}
In the above code, your //do all the things section can look exactly like the first code snippet because you’ll be able to use all the variables with their normal names thanks to the destructuring.
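Here’s a small self-contained sketch of the pattern; the function and field names are made up for illustration:

```javascript
function describeInvoice(argsObject) {
  // pull just the fields we need out of the args object
  var { user, invoice } = argsObject;
  return user + " owes invoice " + invoice;
}

console.log(describeInvoice({ user: "ada", invoice: 42, status: "open" }));
```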

You can test this concept out very easily.

var obj = { foo: 'bar' };
var { foo } = obj;
console.log(foo);

The output of this should be bar. If it’s not, then you may be running an old version of JavaScript.

The same can be done the other way around. Instead of taking an object and getting vars out of it, we can take vars and get an object.

var foo = "foo";
function bar () { console.log("bar"); }

var obj = { foo, bar } //{ foo: "foo", bar: [Function: bar] }

This has been a quick-n-dirty guide to destructuring with ES6. You can find out what else you can do with destructuring here

SSH Fingerprints

Sometimes you may need to get a fingerprint for authentication purposes. This is a quick-n-dirty guide on how to do that.

Run ssh-keygen -lf with the path to your key to get the fingerprint

ssh-keygen -lf ~/.ssh/

The -l option stands for “list” and -f for “filename”.

However, you may get it back in a format you don’t recognize (SHA256, perhaps). If you want it to look something like 00:11:22:33:44:55:6... then you’ll need to add another argument (-E md5).

ssh-keygen -E md5 -lf ~/.ssh/

That will give you something like md5 00:11:22:33:44:55:6... with maybe some stuff before and after. Chances are you’ll just need the numbers separated by colons, so copy the part you need and paste it where you need it.

This has been a quick-n-dirty guide to retrieving ssh fingerprints.

SASS Mixins and Includes

Using @mixin and @include can make your stylesheets more DRY.

Here’s an example of a mixin.

@mixin large-text {
  font: {
    family: Arial;
    size: 20px;
    weight: bold;
  }
}

Here’s how it might be used.

.page-title {
  @include large-text;
  padding: 4px;
  margin-top: 10px;
}

The above compiles to this:

.page-title {
  font-family: Arial;
  font-size: 20px;
  font-weight: bold;
  padding: 4px;
  margin-top: 10px;
}
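Mixins can also take arguments, which makes them even more reusable. A small sketch (the mixin name bordered and the .callout selector are made up):

```scss
@mixin bordered($color, $width: 1px) {
  border: $width solid $color;
}

.callout {
  @include bordered(red, 2px);  // compiles to: border: 2px solid red;
}
```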

This has been a quick-n-dirty guide to mixins and includes in SCSS. For a more robust article on these concepts go to the source of the above code snippets here

Turbo-charge your Docker Workflow

My last blog post was about how to work Docker into your everyday workflow. This post is about making it quick and easy. If you are making a lot of changes to your website, it is a pain to have to build and run Docker images every time you tweak some CSS. Here’s how I addressed that problem.

First let’s review. Say we want our image to be named “devi” and our container to be named “devc”. We’d have to run all these commands to test our changes with Docker:

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi

And that gets old real quick when you are doing a bunch of tinkering on your site. So let’s take all these commands and put them in a Bash script. Let’s name it dbr, for “docker build run”.

vim dbr

Add the above commands to the file. Save and quit and then make it executable.

chmod 744 dbr

Now add it to your path. Check your path with echo $PATH and move the script to one of the locations included in it. I moved mine to /usr/local/bin.

mv dbr /usr/local/bin

Once you’ve moved it to a directory that’s in your path, you can run the command from anywhere. But there’s a caveat: being able to run the command from anywhere doesn’t mean it will actually succeed from anywhere. You’ll need to run it from the directory your Dockerfile is in for it to build the image correctly. So add a line at the beginning of the script that moves to the correct directory; you’ll want to use an absolute path. After you do that, you can truly run the command from anywhere in your folder structure, and it will delete your old image, build a new one, and run a new Docker container with that image.
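Putting it all together, the finished dbr script might look like this; /path/to/project is a placeholder for wherever your Dockerfile actually lives:

```shell
#!/bin/bash
# move to the directory containing the Dockerfile (absolute path)
cd /path/to/project || exit 1

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi
```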

This has been a quick-n-dirty guide on speeding up development. Docker is the context of this, but the concept of making shell scripts to save time can be applied to all kinds of tasks.

Adding Docker to your Workflow

If you are like me, you may be thinking, “Docker’s great, but how exactly do I work it into my workflow?” This quick-n-dirty guide will answer that question. But as always, success is not guaranteed.

After you’ve fixed a bug or added a feature to your application, you’ll want to build a new image with the changes.

docker build -t repo:version2 .

The -t option allows you to name the image: “repo” would be the name of your repository or image and “version2” would be the name of your tag. The “.” is easy to miss, but it’s very important. It’s the path, the only argument required by Docker’s build command. In this case, we are in the directory we want to build from, so we simply use the period.

Now we can run the Docker image to test our changes. But first, don’t forget to stop your currently running Docker container if you have one running on the same port. (Or you could run the new one on a new port.) If you want to use the same port, run docker stop containerId, containerId being the actual id of the container. You can run docker ps to find the container’s id; you only have to use the first 3 characters. Now you can run your new image.

docker run -p 8080:3000 -d repo:version2

This command assumes your app usually runs on port 3000, but here we are mapping it to port 8080. So on your local machine you can reach your app at localhost:8080 even though it’s still running on port 3000 inside the Docker container.
The -d option is detached mode, so it doesn’t take over your terminal.

Now you can check out your changes and make sure everything is working the way it’s supposed to at http://localhost:8080. If your app isn’t behaving as expected and you want to investigate, you can see the logs from your Docker container with docker logs containerId, “containerId” again being the actual id of the container (or at least its first 3 characters). This way you can see any errors that may have been logged.

After you test your changes then do your usual git commands.

git add .
git commit -m "did stuff"
git push origin workingBranch

Then you can push to docker

docker push repo:version2

My next post will be how to make this very easy and quick.