Setting up AWS CLI

Before you read this you may want to check out my other post (I probably should have before writing this) on signing in with different accounts with the AWS CLI: https://jordancotter.wordpress.com/2017/08/08/signing-into-aws-with-aws-cli/. It covers a lot of what’s below. First, install the CLI.

pip install awscli

For Python 3: pip3 install awscli

Check your home folder for a .aws folder.

cd ~ && ls -a

If one doesn’t exist, create one.

mkdir .aws

Put two files in it.

touch .aws/config
touch .aws/credentials
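
Alternatively, the CLI can create both files for you; aws configure will prompt for each of these values:

aws configure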

The config file will have a region and output.

cd .aws && vim config

[default]
region=us-west-2
output=json

The credentials file will have your access key and secret.

vim credentials

[default]
aws_access_key_id=KJFDd892832DJ23K
aws_secret_access_key=ADfj832jfDSJffj298dsdK

To find your access key, sign into the Amazon console (aws.amazon.com) with root credentials >> IAM >> Users >> click on the user >> Security credentials tab. You will see your access key ID there. The secret is shown only once, when the access key is created. You can also create another access key and it will give you another secret for that key.

When you do anything with the AWS CLI it will refer to the files you created above for your credentials, so there is no “logging into” the CLI.
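
Once the files are in place you can sanity-check them with a read-only call; for example, aws sts get-caller-identity prints the account and user your keys belong to:

aws sts get-caller-identity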

This is a very ad hoc, rough guide to document my limited knowledge. It is not meant to be a full tutorial on awscli.

Deploying to Heroku with Docker

Let’s assume you have Docker and the heroku-cli installed. Let’s also assume that you have made an app in Heroku and linked your source code to it with heroku git:remote -a name-of-my-heroku-app-i-made-at-heroku.com, and that you have a Dockerfile. Perhaps something like the file below.

FROM node:6.10.1

#install process manager
RUN npm install pm2 -g

RUN mkdir /app

WORKDIR /app

COPY dist/ /app/

#only install production dependencies
RUN yarn --production

#heroku will ignore this. They handle the ports
EXPOSE 4040

#use process manager to start your service
CMD [ "pm2-runtime", "index.js" ]

The above Dockerfile is for an API. If you are deploying a full-fledged web app it may look different. Notice I’m copying the dist folder only. This would require running something like yarn build before building the Docker image.

Build the image with docker build -t name-of-image ., replacing “name-of-image” with the name you want the image to be. Don’t forget the “.” at the end. Then tag the image and push it to Heroku, making sure you are logged in first.

heroku container:login
docker tag name-of-image registry.heroku.com/name-of-heroku-app/web
docker push registry.heroku.com/name-of-heroku-app/web

Replace “name-of-heroku-app” with the name of your Heroku app. The web at the end is the process type. Normally you would set environment variables in your run command, but Heroku runs the container for you; you just send it the image. So you’ll have to set your environment variables through Heroku instead.
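
One caveat: depending on your heroku-cli version, the push alone may not deploy the new image. Newer versions of the CLI require an explicit release step after the push:

heroku container:release web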

To test your docker image locally you’d run something like this.

docker run -d -p 4040:4040 --name my-sweet-container -e MONGO_HOST=mongodb://mysweetmongohost -e JWT_SECRET=mysweetsecret my-sweet-image
Or you could set the variables with a file:
docker run -d -p 4040:4040 --name my-sweet-container --env-file .env my-sweet-image
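
A .env file is just KEY=value pairs, one per line. A hypothetical one matching the variables above would look like this:

MONGO_HOST=mongodb://mysweetmongohost
JWT_SECRET=mysweetsecret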

But for a Heroku deployment, just push the image and set the environment variables like so:

heroku config:set JWT_SECRET=mysweetsecret MONGO_HOST=mongodb://myhost

When you set an environment variable for your Heroku app, Heroku restarts the app. You can run heroku logs --tail to check the logs and make sure everything’s running smoothly.

This has been a quick and dirty guide to deploying a node app with Docker and Heroku.

CORS

CORS stands for “Cross-Origin Resource Sharing”. It comes into play when a client makes a request to a server on a different domain. There are libraries that make handling this easy (e.g., cors), but it’s better to understand it. CORS can actually be very simple to deal with. It mostly comes down to setting response headers for the OPTIONS method. OPTIONS is the preflight request the browser sends before any request that is not a “simple request”, to make sure it’s safe to send data back. A simple request has many restrictions; you can find the full list in MDN’s CORS documentation. The biggest restriction is that simple requests only allow the GET, HEAD, and POST methods, and “application/json” is a forbidden Content-Type on a simple request, so you can’t even send JSON. So simple requests are not that useful, which forces us to deal with CORS.

You will need to use some sort of middleware in your server to deal with CORS, and in the middleware you will need to set response headers that will allow your request to go through. Mainly one header: “Access-Control-Allow-Origin”. You will need to set that to the domain of your client. Set “Access-Control-Allow-Headers” to allow the headers you need to send, such as “Content-Type”. The middleware will be hit by the OPTIONS request as well as all the others, so for a GET request it will run twice, once for OPTIONS and once for GET. We need to put some logic in the function to handle that. This can be as simple as

if (req.method === 'OPTIONS') {
  return res.end("")
}

And that’s basically it. So if you need to call your server from your localhost on port 3000 then your middleware to handle CORS can be as simple as this

// node/express example

app.use((req, res, next) => {
  res.header({
    'Access-Control-Allow-Origin': 'http://localhost:3000',
    'Access-Control-Allow-Headers': 'Content-Type'
  })
  if (req.method === 'OPTIONS') return res.end("")
  next()
})

It can be that simple, but it may have to get more complicated. If you have multiple origins you want to allow, or credentials in cookies, or other special cases, you’ll need to add more headers. But this will do for simple needs.
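
For example, if the client sends cookies, a minimal sketch might look like the following (same express setup as above; note that the allowed origin must be an explicit domain, not *, when credentials are involved, and the client has to opt in too, e.g. fetch’s credentials: 'include'):

// sketch: CORS when the client sends cookie credentials
app.use((req, res, next) => {
  res.header({
    'Access-Control-Allow-Origin': 'http://localhost:3000',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
    // required for the browser to accept cross-origin responses with cookies
    'Access-Control-Allow-Credentials': 'true'
  })
  if (req.method === 'OPTIONS') return res.end("")
  next()
})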

Also, if you apply this fix and you still just see an OPTIONS request with no GET request in Chrome’s dev tools, make sure you are looking at all requests and not just XHR requests. Sometimes Chrome doesn’t show them all for some reason.

This has been a quick-n-dirty guide to CORS.

Testing Vuejs with Karma

This is a quick-n-dirty guide to setting up unit testing for Vuejs with Karma. It may be more dirty than quick, though, so bear with me. This assumes you have an existing project with webpack already and you just want to add unit tests to it.

Let’s start with dependencies. Install all of the following as development dependencies: npm install package --save-dev for npm, or yarn add package --dev for yarn.

chai
core-js
cross-env
extract-text-webpack-plugin
karma
karma-coverage
karma-mocha
karma-phantomjs-launcher
karma-phantomjs-shim
karma-sinon-chai
karma-sourcemap-loader
karma-spec-reporter
karma-webpack
mocha
phantomjs-prebuilt
postcss-loader
sinon
sinon-chai
url-loader
vue-style-loader
webpack-bundle-analyzer
webpack-dev-server
webpack-merge

Add a test folder. Inside it, add a unit folder. The unit folder will have a specs folder, an index.js file, and a karma.conf.js file.

The index.js file will look like this:

require('core-js/shim')
import Vue from 'vue'

Vue.config.productionTip = false

// require all test files (files that end with .spec.js)
const testsContext = require.context('./specs', true, /\.spec$/)
testsContext.keys().forEach(testsContext)

// require all src files except main.js for coverage.
// you can also change this to match only the subset of files that
// you want coverage for.
const srcContext = require.context('../../src', true, /^\.\/(?!main(\.js)?$)/)
srcContext.keys().forEach(srcContext)

The karma.conf.js will look like this:

// This is a karma config file. For more details see
//   http://karma-runner.github.io/0.13/config/configuration-file.html
// we are also using it with karma-webpack
//   https://github.com/webpack/karma-webpack

var webpackConfig = require('../../build/webpack.test.conf')

module.exports = function karmaConfig (config) {
  config.set({
    // to run in additional browsers:
    // 1. install corresponding karma launcher
    //    http://karma-runner.github.io/0.13/config/browsers.html
    // 2. add it to the `browsers` array below.
    browsers: ['PhantomJS'],
    frameworks: ['mocha', 'sinon-chai', 'phantomjs-shim'],
    reporters: ['spec'],
    files: ['./index.js'],
    preprocessors: {
      './index.js': ['webpack', 'sourcemap']
    },
    webpack: webpackConfig,
    webpackMiddleware: {
      noInfo: true
    }
  })
}

Now make a build folder on the root level. Inside of it you will have utils.js, vue-loader.conf.js, webpack.base.conf.js, and webpack.test.conf.js.

//utils.js

'use strict'
const path = require('path')
const config = require('../config')
const ExtractTextPlugin = require('extract-text-webpack-plugin')
const packageConfig = require('../package.json')

exports.assetsPath = function (_path) {
  const assetsSubDirectory = process.env.NODE_ENV === 'production'
    ? config.build.assetsSubDirectory
    : config.dev.assetsSubDirectory

  return path.posix.join(assetsSubDirectory, _path)
}

exports.cssLoaders = function (options) {
  options = options || {}

  const cssLoader = {
    loader: 'css-loader',
    options: {
      sourceMap: options.sourceMap
    }
  }

  const postcssLoader = {
    loader: 'postcss-loader',
    options: {
      sourceMap: options.sourceMap
    }
  }

  // generate loader string to be used with extract text plugin
  function generateLoaders (loader, loaderOptions) {
    const loaders = options.usePostCSS ? [cssLoader, postcssLoader] : [cssLoader]

    if (loader) {
      loaders.push({
        loader: loader + '-loader',
        options: Object.assign({}, loaderOptions, {
          sourceMap: options.sourceMap
        })
      })
    }

    // Extract CSS when that option is specified
    // (which is the case during production build)
    if (options.extract) {
      return ExtractTextPlugin.extract({
        use: loaders,
        fallback: 'vue-style-loader'
      })
    } else {
      return ['vue-style-loader'].concat(loaders)
    }
  }

  // https://vue-loader.vuejs.org/en/configurations/extract-css.html
  return {
    css: generateLoaders(),
    postcss: generateLoaders(),
    less: generateLoaders('less'),
    sass: generateLoaders('sass', { indentedSyntax: true }),
    scss: generateLoaders('sass'),
    stylus: generateLoaders('stylus'),
    styl: generateLoaders('stylus')
  }
}

// Generate loaders for standalone style files (outside of .vue)
exports.styleLoaders = function (options) {
  const output = []
  const loaders = exports.cssLoaders(options)

  for (const extension in loaders) {
    const loader = loaders[extension]
    output.push({
      test: new RegExp('\\.' + extension + '$'),
      use: loader
    })
  }

  return output
}

exports.createNotifierCallback = () => {
  const notifier = require('node-notifier')

  return (severity, errors) => {
    if (severity !== 'error') return

    const error = errors[0]
    const filename = error.file && error.file.split('!').pop()

    notifier.notify({
      title: packageConfig.name,
      message: severity + ': ' + error.name,
      subtitle: filename || '',
      icon: path.join(__dirname, 'logo.png')
    })
  }
}
//vue-loader.conf.js

'use strict'
const utils = require('./utils')
const config = require('../config')
const isProduction = process.env.NODE_ENV === 'production'
const sourceMapEnabled = isProduction
  ? config.build.productionSourceMap
  : config.dev.cssSourceMap

module.exports = {
  loaders: utils.cssLoaders({
    sourceMap: sourceMapEnabled,
    extract: isProduction
  }),
  cssSourceMap: sourceMapEnabled,
  cacheBusting: config.dev.cacheBusting,
  transformToRequire: {
    video: ['src', 'poster'],
    source: 'src',
    img: 'src',
    image: 'xlink:href'
  }
}
//webpack.base.conf.js

'use strict'
const path = require('path')
const utils = require('./utils')
const config = require('../config')
const vueLoaderConfig = require('./vue-loader.conf')

function resolve (dir) {
  return path.join(__dirname, '..', dir)
}



module.exports = {
  context: path.resolve(__dirname, '../'),
  entry: {
    app: './src/main.js'
  },
  output: {
    path: config.build.assetsRoot,
    filename: '[name].js',
    publicPath: process.env.NODE_ENV === 'production'
      ? config.build.assetsPublicPath
      : config.dev.assetsPublicPath
  },
  resolve: {
    extensions: ['.js', '.vue', '.json'],
    alias: {
      'vue$': 'vue/dist/vue.esm.js',
      '@': resolve('src'),
    }
  },
  module: {
    rules: [
      {
        test: /\.vue$/,
        loader: 'vue-loader',
        options: vueLoaderConfig
      },
      {
        test: /\.js$/,
        loader: 'babel-loader',
        include: [resolve('src'), resolve('test'), resolve('node_modules/webpack-dev-server/client')]
      },
      {
        test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
        loader: 'url-loader',
        options: {
          limit: 10000,
          name: utils.assetsPath('img/[name].[hash:7].[ext]')
        }
      },
      {
        test: /\.(mp4|webm|ogg|mp3|wav|flac|aac)(\?.*)?$/,
        loader: 'url-loader',
        options: {
          limit: 10000,
          name: utils.assetsPath('media/[name].[hash:7].[ext]')
        }
      },
      {
        test: /\.(woff2?|eot|ttf|otf)(\?.*)?$/,
        loader: 'url-loader',
        options: {
          limit: 10000,
          name: utils.assetsPath('fonts/[name].[hash:7].[ext]')
        }
      }
    ]
  },
  node: {
    // prevent webpack from injecting useless setImmediate polyfill because Vue
    // source contains it (although only uses it if it's native).
    setImmediate: false,
    // prevent webpack from injecting mocks to Node native modules
    // that do not make sense for the client
    dgram: 'empty',
    fs: 'empty',
    net: 'empty',
    tls: 'empty',
    child_process: 'empty'
  }
}
//webpack.test.conf.js

'use strict'
// This is the webpack config used for unit tests.

const utils = require('./utils')
const webpack = require('webpack')
const merge = require('webpack-merge')
const baseWebpackConfig = require('./webpack.base.conf')

const webpackConfig = merge(baseWebpackConfig, {
  // use inline sourcemap for karma-sourcemap-loader
  module: {
    rules: utils.styleLoaders()
  },
  devtool: '#inline-source-map',
  resolveLoader: {
    alias: {
      // necessary to make lang="scss" work in test when using vue-loader's ?inject option
      // see discussion at https://github.com/vuejs/vue-loader/issues/724
      'scss-loader': 'sass-loader'
    }
  },
  plugins: [
    new webpack.DefinePlugin({
      'process.env': require('../config/test.env')
    })
  ]
})

// no need for app entry during tests
delete webpackConfig.entry

module.exports = webpackConfig

Next you will make a config folder on the root level with the following files:

//index.js

'use strict'
// Template version: 1.3.1
// see http://vuejs-templates.github.io/webpack for documentation.

const path = require('path')

module.exports = {
  dev: {

    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},

    // Various Dev Server settings
    host: 'localhost', // can be overwritten by process.env.HOST
    port: 8080, // can be overwritten by process.env.PORT, if port is in use, a free one will be determined
    autoOpenBrowser: false,
    errorOverlay: true,
    notifyOnErrors: true,
    poll: false, // https://webpack.js.org/configuration/dev-server/#devserver-watchoptions-


    /**
     * Source Maps
     */

    // https://webpack.js.org/configuration/devtool/#development
    devtool: 'cheap-module-eval-source-map',

    // If you have problems debugging vue-files in devtools,
    // set this to false - it *may* help
    // https://vue-loader.vuejs.org/en/options.html#cachebusting
    cacheBusting: true,

    cssSourceMap: true
  },

  build: {
    // Template for index.html
    index: path.resolve(__dirname, '../dist/index.html'),

    // Paths
    assetsRoot: path.resolve(__dirname, '../dist'),
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',

    /**
     * Source Maps
     */

    productionSourceMap: true,
    // https://webpack.js.org/configuration/devtool/#production
    devtool: '#source-map',

    // Gzip off by default as many popular static hosts such as
    // Surge or Netlify already gzip all static assets for you.
    // Before setting to `true`, make sure to:
    // npm install --save-dev compression-webpack-plugin
    productionGzip: false,
    productionGzipExtensions: ['js', 'css'],

    // Run the build command with an extra argument to
    // View the bundle analyzer report after build finishes:
    // `npm run build --report`
    // Set to `true` or `false` to always turn it on or off
    bundleAnalyzerReport: process.env.npm_config_report
  }
}
//dev.env.js

'use strict'
const merge = require('webpack-merge')
const prodEnv = require('./prod.env')

module.exports = merge(prodEnv, {
  NODE_ENV: '"development"'
})
//test.env.js

'use strict'
const merge = require('webpack-merge')
const devEnv = require('./dev.env')

module.exports = merge(devEnv, {
  NODE_ENV: '"testing"'
})
//prod.env.js

'use strict'
module.exports = {
  NODE_ENV: '"production"'
}

Now you will need to actually create a test. Inside the test/unit/specs folder, create helloworld.spec.js. Every test file must follow this naming convention to be run: thing-to-test.spec.js

//helloworld.spec.js
describe('HelloWorld', () => {
  it('should run and succeed', () => {
    expect('hello world').to.equal('hello world')
  })
})
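
That spec doesn’t actually touch Vue. A spec that mounts a component might look like this. This is a sketch; it assumes a HelloWorld.vue component at src/components/HelloWorld.vue (a hypothetical path, so point the import at a component you actually have). The @ alias comes from webpack.base.conf.js above.

//component.spec.js (hypothetical)
import Vue from 'vue'
import HelloWorld from '@/components/HelloWorld'

describe('HelloWorld.vue', () => {
  it('renders its content', () => {
    // mount the component off-document and inspect the rendered DOM
    const Constructor = Vue.extend(HelloWorld)
    const vm = new Constructor().$mount()
    expect(vm.$el.textContent).to.be.a('string')
  })
})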

Now inside of your package.json file add this to your scripts.

"unit": "cross-env BABEL_ENV=test karma start test/unit/karma.conf.js --single-run"

Now run yarn unit (or npm run unit) to run your tests.

If you get an error about node-sass then you need to rebuild it.
npm rebuild node-sass for npm
yarn add --force node-sass for yarn

Cookies!!

Everybody loves cookies, but this article isn’t about that kind. It’s about browser cookies. Web browsers use cookies to save values in string form. Local Storage is newer and easier to work with for saving values, but the benefit of cookies is that they automatically get sent with every request, so they are a good place to store things like session tokens. They are actually very simple.

To look at a website’s current cookies (in Chrome), open the dev tools and click on the Application tab, then Cookies. Then click to expand and click on the URL. The current cookies should show on the right.
To get the cookies programmatically simply use document.cookie. You can try this out in the console window.
You’ll see that it is a string of key-value pairs separated by semicolons.

To add a cookie you can manually add it by typing it into the cookie section of the console, but this isn’t very useful. To add one programmatically simply use document.cookie = "key=value". For example, document.cookie = "foo=foo" would add a cookie named foo with a value of “foo”.

As I stated earlier, document.cookie will give you the cookies in string form separated by semicolons. So to retrieve a specific cookie you’ll have to use some string manipulation.

First, split by the name of the cookie plus “=”.

document.cookie.split("foo=")

That will give you an array of two strings: all the cookies before “foo=” in the first element of the array, and everything after “foo=” in the second element. This is where our value is. We can use pop() to retrieve the last element of the array, where our value will be.

Unless our cookie happens to be the last one (which we can never be sure of in the middle of execution), the value will be followed by other cookies, so we’ll need to split again, this time on “;” since that’s what separates the cookies. After this split, the value should be the first element of the returned array. With shift() we can retrieve it. Let’s put this all together.

document.cookie.split("key=").pop().split(";").shift()

The above code should get you the value of any cookie if you replace “key” with the name of the cookie.
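
Wrapped up as a reusable helper, the same technique looks like this (note the caveats: it assumes the cookie exists, and a name that is a suffix of another cookie’s name, like “foo” vs “xfoo”, can produce a false match):

// get a cookie's value by name using the split/pop/split/shift technique
function getCookie (name) {
  return document.cookie.split(name + "=").pop().split(";").shift()
}

getCookie("foo") // "foo"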

Expiration

You can make cookies expire. This is useful for tokens that authenticate users. After they expire, the site logs the user out.

While before we simply did document.cookie = "foo=foo", if we wanted foo to expire then we would do this instead.

document.cookie = "foo=foo; expires=" + date.toGMTString()

date would be a date that you set. For example, if you wanted it to expire in 3 days you’d do this.


var date = new Date()
date.setTime(date.getTime() + 3 * 24 * 60 * 60 * 1000)
document.cookie = "foo=foo; expires=" + date.toGMTSTring()

This has been a quick-n-dirty guide on browser cookies.

Decoding in Elm

When you make a call to an API in Elm you have to decode the result into types. This can be hard to wrap your head around at first, and it takes more steps than you would expect (certainly more than the one step it takes in JavaScript to turn JSON into usable data: JSON.parse).

When you get a response from an API it will be in the form of a (Result Http.Error String) type. You’ll have to pattern match on it with a case expression to see if it has errors.

case result of
    Ok jsonBlob ->
        --do something with json blob like decode it. 
   
    Err err -> 
        --handle error

This will only catch errors from the server (a 500 status, for example, among others). If it returns a 200 status and the response is in this form:

{ success: false, error: "something went wrong" }

You’ll have to handle that error yourself in the Ok case.

Let’s say it returns a Player. The JSON may look something like this.

"{ \"success\": true, \"payload\": { \"id\": 1, \"name\": \"bob\", \"age\": 29 } }"

The Json.Decode Elm package provides us with a decodeString function.

decodeString : Decoder a -> String -> Result String a

It takes in a Decoder and a String and returns a Result.
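
For reference, here is a Player type matching that payload (a sketch; the post assumes you have defined this yourself):

type alias Player =
    { id : Int
    , name : String
    , age : Int
    }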

The Decoder that you pass in will look like this

playerDecoder : Decoder Player
playerDecoder = 
    at ["payload"]
        (decode Player
            |> required "id" int
            |> required "name" string
            |> required "age" int
        )

It’s important to note that order matters here. You will require the properties in the order they are defined on the Player type. decode, required, int, string, and at are all functions and types that come from the Json.Decode and Json.Decode.Pipeline packages. So the above code assumes you have both of those packages installed and imported, and that you have a Player type defined. The at function tells the decoder where the object is located. In this case it’s in a property called payload. Notice “payload” is in an array; the at function can handle nested data. For example, if the Player was at payload.data then we would have both properties in the array like so.

at ["payload", "data"]

The rest is self-explanatory. We are decoding the string into a Player. We are requiring that the Player has id, name, and age. The json blob may have more properties than this, but this is all we care about. If the Player type has more properties they need to be included. But if you don't need them you can use the optional function instead of the required function. The optional function is just like the required function except that it requires one more argument, a default value in case the optional property doesn't exist. So if it's a list you may have [] as your third argument. A string will usually have "" as the third argument.

So let's use the decoder we made. Remember we made it for the decodeString function. It will be the 2nd argument.

decodeString playerDecoder jsonBlob

There’s still more to be done. The above function call will give us a Result, not a Player, so we need to pattern match on the Result to get the Player.

case decodeString playerDecoder jsonBlob of
    Ok value -> 
        value
    
    Err err -> 
        --do something with error or return default type

The above code will return the value if the result is okay. In this case, the value will be a Player.

Now let's put it all together. Let's say we want to update our model with the new player.

updateModel : (Result Http.Error String) -> Model -> Model
updateModel result model =
    let 
        playerDecoder = 
            at ["payload"]
                (decode Player
                    |> required "id" int
                    |> required "name" string
                    |> required "age" int
                )
        decodePlayer jsonBlob =
            case decodeString playerDecoder jsonBlob of
                Ok value -> 
                    value
    
                Err err -> 
                    Player 0 "" 0

    in 
        case result of
            Ok jsonBlob ->
                { model | player = decodePlayer jsonBlob }
   
            Err err -> 
                { model | error = decodeError err }

The updateModel function updates the model with either a player or an error. decodeError would be a function similar to decodePlayer, except that it decodes an error instead of a player. It’s not defined in the snippet above, but it would use the same concepts. In decodePlayer, hitting the Err branch means something was wrong with your decoder, so you must have made a mistake in your playerDecoder. You can do something more sophisticated with the error, but for now we’re just returning a default player.

This has been a quick-n-dirty guide to decoding json payloads in Elm.

GraphQL requests with JSON

GraphiQL makes GraphQL queries very easy, almost deceptively easy. But it’s basically just for testing. What happens when you want to send a real request using JSON? Well, this post will show you how, and how the JSON version differs from the GraphiQL version.

Let’s say you have a mutation that you run on GraphiQL.

mutation {
  createRole(name: "arole", isDefault: false) {
    id
  }
}

That will create a role, depending on how you defined your mutation; here it creates a role with the name “arole” and returns its id. This is exactly how it looks in GraphiQL. Let’s look at the JSON version as it would appear in Postman.

{
  "query": "mutation { createRole(name: \"arole\", isDefault: false) { id }}"
}

In JSON, you basically put the entire GraphiQL version of the query into a “query” property in string form. You cannot use single quotes (‘) for strings. You must use double quotes and escape them (at least in Postman).
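
If the escaping gets unwieldy, the standard GraphQL request format also accepts a separate “variables” property, which keeps the values as plain JSON (this sketch assumes name is a String! and isDefault a Boolean! in your schema):

{
  "query": "mutation CreateRole($name: String!, $isDefault: Boolean!) { createRole(name: $name, isDefault: $isDefault) { id } }",
  "variables": { "name": "arole", "isDefault": false }
}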

This has been a quick-n-dirty guide to doing graphql requests with json.

Going Down the Rabbit Hole…

I inherited a project that involved Terraform. In the project, modules were used. The one other time I’d used Terraform, I used resources. Modules seemed similar, but I hadn’t used them before, so I wanted to know how they differed. I found an article online about it that was part of a larger series. The previous part in the series was about why they chose Terraform instead of Chef or Ansible, among others. Both of those tools are also used in my circle, so I was curious about their take. So I started to read that article, and Packer was brought up. I had never heard of Packer, so I googled it. While reading about it, I found it was very similar to Docker. So I wondered how they’d differ and which one was better, and I googled “docker vs packer”. Now I was four topics removed from my original task, reading about Docker vs Packer when I first set out to learn about Terraform modules.

I think it’s good to go beyond the bare minimum one must learn to accomplish a task, but where do we draw the line? How far down the rabbit hole do we go? We have to cut ourselves off at some point or we’d be doing random research for hours. I don’t know if I have the right answer, but in this instance I turned back from the Docker vs Packer research and went back to reading why Terraform was a good choice over the others, which is also a bunny trail, but one that is still related to my current task.

So when you go on your bunny trails (which is okay; that’s how we learn), just ask yourself as you go deeper, “Is this related to my current task?” If not, stop there and recursively finish your research to get back to your current task. Of course, this only applies when you are on someone else’s time (like a boss’s or a client’s). If it’s your own time, then study away and soak up the random knowledge.

Solving the ../../../../lib/myService.js Problem in Node

If you’ve ever required a service from deep within a large project, you are familiar with this problem. The following solution uses absolute paths instead of relative paths to solve it.

In your app.js (or index.js or server.js or main.js or whatever you call your main js file in the root directory of your project) add the following function.

global.include = file => require(__dirname+'/'+file);
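
A variant using path.join (the same idea, but it also handles path separators for you) might be:

// same global helper, letting Node's path module join the pieces
global.include = file => require(require('path').join(__dirname, file));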

Now instead of figuring out how many directories you are from your lib folder and typing this out

const myService = require('../../../lib/myService');

You can simply do the following

const myService = include('lib/myService');

Shout out to amoniker for writing the article that described this solution. https://coderwall.com/p/th6ssq/absolute-paths-require

Signing into AWS with aws-cli and multiple accounts

aws-cli allows you to interact with AWS from your command line. The first thing you have to do is log in. But if you have multiple AWS accounts, you want to make sure you are signing into the right one. You should have two files in the .aws directory in your home folder: credentials and config. credentials should look something like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....

But if you have multiple accounts then it should look more like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....
[account1]
aws_access_key_id = 4FD...
aws_secret_access_key = 02e....
[account2]
aws_access_key_id = 5E8...
aws_secret_access_key = ab0c....

Your ~/.aws/config file is generated with aws configure and should look something like this. Note that, unlike the credentials file, named profiles in the config file take a profile prefix in the section header.

[default]
region = us-east-1
output = json
[profile account1]
region = us-east-1
output = json
[profile account2]
region = us-east-2
output = json

So now when you log in to AWS you can specify a profile.

$(aws ecr get-login --no-include-email --region us-east-1 --profile account2)

You’d think that if your region was in your config you wouldn’t need the --region flag; this wasn’t the case for me. Also, --no-include-email seems to be necessary as well, although once, on a server, I was able to simply use $(aws ecr get-login) and it worked. See the image below for what worked for me and what didn’t.
[Screenshot: terminal output showing which variations of the get-login command worked and which didn’t]
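
Note that --profile isn’t specific to ecr get-login; every AWS CLI command accepts it, and you can also set a default profile for the whole shell session with the AWS_PROFILE environment variable:

aws s3 ls --profile account1
export AWS_PROFILE=account2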