Testing Private Functions

I’m a big fan of test-driven development, but until now I’ve never had a good way of testing private functions. For the purposes of this article, private functions are functions that are not exported.

//addService.js

const id = x => x
const add = (x, y) => x + y

module.exports = { add }

The id function is private since it’s not exported. The add function is public since it is. If we were to test our module, it would look something like this:

const expect = require('chai').expect
const addService = require('./addService.js')

describe('add service', function () {

  it('should add', function (done) {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
    done()
  })
})

Notice the const func = addService.add line. It’s possible to grab the add function because it’s exported. If we tried the same thing with the id function, we’d get undefined. We could just export the id function too, but that’s not good practice: you don’t want to expose more than you have to. Luckily, with Rewire, we don’t have to. Rewire works exactly like require, except that it can reach the private functions too. By using rewire we can test all of our functions. Our tests would look something like this:

const expect = require('chai').expect
const rewire = require('rewire')
const addService = rewire('./addService.js')

describe('add service', function () {

  it('should add', function (done) {
    const func = addService.add
    expect(func(1, 1)).to.equal(2)
    done()
  })

  it('should return identity with id function', function (done) {
    const func = addService.__get__('id')
    expect(func(1)).to.equal(1)
    done()
  })
})

Notice that instead of require('./addService.js') we have rewire('./addService.js'); we just substituted “rewire” for “require”. Rewire has a special getter function that allows it to get private variables. __get__ takes in a string, which is the name of the function or variable you want to get. So in our second test we can grab the private function and test it like we would any exported function.

Please forgive the somewhat useless example here, where the identity function doesn’t do anything and isn’t even used.

FYI, this was not covered above, but you will need mocha to run these tests.

npm install -g mocha
mocha nameoftestfile.js

This has been a quick-n-dirty guide to testing private functions from a module. You can see other solutions for this problem here: https://stackoverflow.com/questions/22097603/unit-testing-of-private-functions-with-mocha-and-node-js. Shout out to barwin for his answer, which inspired this post.

Killing a process

I was asked about killing a process in an interview and my answer was close, but not quite correct. I realized it wasn’t correct when I found myself actually having to do it just days later. Let’s say you have a node app running that you want to kill.

ps aux | grep "node"

ps aux lists all running processes. We pipe the output into grep to find our node process.

kill pid

The kill command kills the process; pid is a placeholder for the actual process id we got from ps aux.

This has been a quick-n-dirty guide to finding and killing a process.

UPDATE: I found an even easier way to kill processes. The pidof command will find the pid of a process. You may have to brew install pidof or apt-get install pidof if you don’t have it, but once you do, running the following command will show you the pid of your node app.

pidof node

This may give you multiple pids if multiple node apps are running. You can kill them all with one simple command.

kill $(pidof node)

UPDATE: Yet another very easy way to kill a process:

$ pgrep node
93498
$ kill 93498
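If you want to practice the pgrep/kill combination without touching a real app, you can aim it at a throwaway process instead; here sleep stands in for node:

```shell
# Start a throwaway process to practice on
sleep 300 &

# -n picks the newest matching process, so we don't touch any other sleeps
pid=$(pgrep -n sleep)

# Send it the default signal (SIGTERM)
kill "$pid"
```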

ES6 Destructuring

If you’ve ever dealt with functions that have a ton of arguments then you will appreciate this. Consider the following function:

function foo (user, company, invoice, status, bar, stuff, things) {
      //do some stuff with all the things
}

According to Code Complete, a function should never have more than 7 arguments. Here we have exactly 7, but it still seems like too many. Perhaps an args object would help.

 function foo (argsObject) {
      //do some stuff with all the things
}

That might look great, but in practice it’s usually like this:

 function foo (argsObject) {
      var user = argsObject.user;
      var company = argsObject.company;
      var invoice = argsObject.invoice;
      var status = argsObject.status;
      var bar = argsObject.bar;
      var stuff = argsObject.stuff;
      var things = argsObject.things;

      //do some stuff with all the things
}

Sheesh. We might as well go back to having a billion arguments. This is one instance where destructuring can make our code a little cleaner. With ES6 destructuring we can do this:

function foo (argsObject) {
  var {user, company, invoice, status, bar, stuff, things} 
    = argsObject

  //do some stuff with all the things
}

In the above code, the //do some stuff with all the things section can look exactly like the first code snippet, because destructuring lets you use all the variables by their normal names.

You can test this concept out very easily.

var obj = { foo: 'bar' };
var { foo } = obj;

console.log(foo);

The output of this should be bar. If it’s not, you may be running an old version of JavaScript.
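You can go a step further and destructure right in the parameter list, which removes even the one-line unpacking step. This is standard ES6; the function and field names below are just made up for illustration:

```javascript
// Destructure directly in the parameter list; amount gets a default if missing
function describeInvoice({ user, company, amount = 0 }) {
  return company + ' bills ' + user + ' for $' + amount;
}

console.log(describeInvoice({ user: 'Ann', company: 'Acme', amount: 42 }));
// Acme bills Ann for $42
console.log(describeInvoice({ user: 'Ann', company: 'Acme' }));
// Acme bills Ann for $0
```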

The same can be done the other way around. Instead of taking an object and getting vars out of it, we can take vars and get an object.

var foo = "foo";
function bar () { console.log("bar"); }

var obj = { foo, bar } //{ foo: "foo", bar: [Function: bar] }
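Destructuring also lets you rename as you unpack, which is handy when a property name would clash with a variable you already have:

```javascript
// Pull foo out of the object, but bind it to the name baz instead
var obj = { foo: 'bar' };
var { foo: baz } = obj;

console.log(baz); // bar
```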

This has been a quick-n-dirty guide to destructuring with ES6. You can find out what else you can do with destructuring here https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment

SSH Fingerprints

Sometimes you may need to get a fingerprint for authentication purposes. This is a quick-n-dirty guide on how to do that.

Run ssh-keygen -lf with the path to your key to get the fingerprint:

ssh-keygen -lf ~/.ssh/id_rsa.pub

The -l option stands for “list” and -f for “filename”.

However, you may get it back in a format you don’t recognize (SHA256, perhaps). If you want it to look something like this 00:11:22:33:44:55:6... then you’ll need to add another argument (-E md5).

ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub

That will give you something like md5 00:11:22:33:44:55:6... with maybe some stuff before and after. Chances are you’ll just need the numbers separated by colons, so copy the part you need and paste it where you need it.
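If you want to experiment without touching your real key, you can generate a throwaway one first (this assumes ssh-keygen is installed; the /tmp path and key type are arbitrary):

```shell
# Generate a throwaway key with no passphrase
ssh-keygen -t ed25519 -N "" -f /tmp/demo_key -q

# Default (SHA256) fingerprint, then the colon-separated MD5 form
ssh-keygen -lf /tmp/demo_key.pub
ssh-keygen -E md5 -lf /tmp/demo_key.pub
```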

This has been a quick-n-dirty guide to retrieving ssh fingerprints.

SASS Mixins and Includes

Using @mixin and @include can make your stylesheets more DRY.

Here’s an example of a mixin.

@mixin large-text {
  font: {
    family: Arial;
    size: 20px;
    weight: bold;
  }
}

Here’s how it might be used.

.page-title {
  @include large-text;
  padding: 4px;
  margin-top: 10px;
}

The above compiles to this:

.page-title {
  font-family: Arial;
  font-size: 20px;
  font-weight: bold;
  padding: 4px;
  margin-top: 10px; 
}
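Mixins can also take arguments, which makes them even more reusable. This is standard SASS; the default size and the override below are made-up values:

```scss
@mixin large-text($size: 20px) {
  font: {
    family: Arial;
    size: $size;
    weight: bold;
  }
}

.page-title {
  @include large-text(24px);
}
```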

This has been a quick-n-dirty guide to mixins and includes in SCSS. For a more robust article on these concepts go to the source of the above code snippets here http://sass-lang.com/documentation/file.SASS_REFERENCE.html#including_a_mixin

Turbo-charge your Docker Workflow

My last blog post was about how to use Docker in your everyday workflow. This post will show how to make it quick and easy. If you are making a lot of changes to your website, it’s a pain to build and run Docker images every time you tweak some CSS. Here’s how I addressed that problem.

First let’s review. Given that we want our image named “devi” and our container named “devc”, we’d have to run all these commands to test our changes with Docker:

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi

And that gets old real quick when you are doing a bunch of tinkering on your site. So let’s take all these commands and put them in a bash script. Let’s name it dbr, for “docker build run”.

vim dbr

Add the above commands to the file. Save and quit, and then make it executable:

chmod 744 dbr

Now add it to your path. Check your path with echo $PATH and move the script to one of the directories listed there. I moved mine to /usr/local/bin.

mv dbr /usr/local/bin

Once you’ve moved it to a directory that’s in your path, you can run the command from anywhere. But there’s a caveat: just because you can run the command from anywhere doesn’t mean it will actually succeed from anywhere. You’ll need to run it from the directory your Dockerfile is in for it to build the image correctly. So add a line at the beginning of the script that changes to the correct directory, using an absolute path. After you do that, you can truly run the command from anywhere in your folder structure, and it will delete your old image, build a new one, and run a new Docker container with that image.
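Putting it all together, here’s what the finished script might look like. It’s written to /tmp below just to show the shape, and /path/to/your/project is a placeholder for wherever your Dockerfile actually lives:

```shell
# Write out the finished dbr script; /path/to/your/project is a placeholder
cat > /tmp/dbr <<'EOF'
#!/bin/bash
# Move to the directory that contains the Dockerfile (absolute path)
cd /path/to/your/project || exit 1

docker stop devc
docker rm devc
docker rmi devi
docker build -t devi .
docker run --name devc -p 8080:3000 -d devi
EOF
chmod 744 /tmp/dbr
```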

This has been a quick-n-dirty guide on speeding up development. Docker is the context of this, but the concept of making shell scripts to save time can be applied to all kinds of tasks.

Adding Docker to your Workflow

If you are like me, you may be thinking “Docker’s great, but how exactly do I work it into my workflow?” This quick-n-dirty guide will answer that question. But as always, success is not guaranteed.

After you’ve fixed a bug or added a feature to your application, you’ll want to build a new image with the changes.

docker build -t repo:version2 .

The -t option allows you to name the image. “repo” would be the name of your repository or image and “version2” would be the name of your tag. The “.” is easy to miss, but it’s very important: it’s the build path, the only argument required by Docker’s build command. In this case we are in the directory we want to build from, so we simply use the period.

Now we can run the Docker image to test our changes. But first, don’t forget to stop your currently running Docker container if you have one on the same port (or you could run the new one on a different port). To stop it, run docker stop containerId, containerId being the actual id of the container. You can run docker ps to find the container’s id; you only have to use the first 3 characters. Now you can run your new image.

docker run -p 8080:3000 -d repo:version2

This command assumes your app usually runs on port 3000, but here we are mapping it to port 8080. So on your local machine you can reach your app at localhost:8080 even though it’s still running on port 3000 inside the Docker container. The -d option is detached mode, so it doesn’t take over your terminal.

Now you can check out your changes and make sure everything is working the way it’s supposed to at http://localhost:8080. If your app isn’t behaving as expected, you can see the logs from your Docker container with docker logs containerId, with “containerId” being the actual id for the container (again, the first 3 characters are enough). This way you can see any errors that may have been logged.

After you test your changes then do your usual git commands.

git add .
git commit -m "did stuff"
git push origin workingBranch

Then you can push the image to Docker Hub:

docker push repo:version2

My next post will be how to make this very easy and quick.

Dockerizing a Node App

So you have a node app and you want to run it in a docker container. This is a very quick and dirty guide that will show you how. Success is not guaranteed, but hopefully it can give you a rough idea of what you need to do.

Prerequisites:

  1. Node app
  2. Docker installed

Given that you have a Node app and Docker installed, making your app run in a Docker container is very simple. First you will need a Dockerfile. This file gives the instructions for setting up the environment. cd into the directory of your node app and create the Dockerfile:

vim Dockerfile

Your Dockerfile should look something like this:

FROM node
COPY . /app
WORKDIR /app
RUN npm install
ENTRYPOINT ["npm", "start"]

FROM node means your base image will be the official node image: one that already has Node.js installed and configured.

The next two lines copy all of the contents of the current folder on your machine into an /app folder and set the working directory to /app, where all the following commands will run.

RUN precedes any command you want executed during the build, so RUN npm install installs all of your app’s dependencies.

The last line sets the entry point. This will be the command that starts the app, in this case, npm start.

Save the file and quit.
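One common refinement, purely optional: copy package.json and install dependencies before copying the rest of your code, so Docker can reuse the cached npm install layer when only your source files change:

```dockerfile
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
```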

Now that we have the Dockerfile, we can build our image with:

docker build -t our-sweet-image .

If you have a docker account it may be more like

docker build -t username/our-sweet-image .

The -t option is what allows us to name the image. You can run docker images to verify that the image was created.

Now that we have the image we can run it with

docker run username/our-sweet-image

But that will run directly in your terminal, and you’ll have to open a new terminal window if you want to keep using your command line, so let’s add the -d option so that it runs in detached mode:

docker run -d username/our-sweet-image

But chances are, your node app runs on a specific port. So if you need to map ports, run the image with this command:

docker run -p 8080:3000 -d username/our-sweet-image

This assumes your node app is set up to run on port 3000, but now you can reach it at http://localhost:8080 (or http://127.0.0.1:8080). So to recap: 8080 is the port that reaches your app locally, but the Docker container still runs it on 3000. So if you went into your container (which you can do with docker exec -it containerName bash; you can check the container name with docker ps, or use the container id instead) and ran curl http://localhost:3000, you would get the html of your node app.

There are still a couple things you may need for your app to run correctly. If your app uses SASS, you may need RUN npm rebuild node-sass after your npm install command.
If your app runs gulp tasks and/or tests, add the command that runs them just before the entry point command, e.g. RUN gulp build && gulp tasks, or if you have that in an npm script, just add RUN npm run build.

Hope this helps. My next blog will be about how to update images and push them to your docker account.