Cookies!!

Everybody loves cookies, but this article isn’t about that kind. It’s about browser cookies. Web browsers use cookies to save values in string form. They are actually very simple.

To look at a website’s current cookies in Chrome, open DevTools and click the Application tab, then expand Cookies and click the site’s URL. The current cookies will show on the right.
To get the cookies programmatically, simply use document.cookie. You can try this out in the console.
You’ll see that it is a string of key-value pairs separated by semicolons.

You can add a cookie manually by typing it into the Cookies panel, but this isn’t very useful. To add one programmatically, simply use document.cookie = "key=value". For example, document.cookie = "foo=foo" would add a cookie named foo with a value of “foo”.

As I stated earlier, document.cookie will give you the cookies in string form separated by semicolons. So to retrieve a specific cookie you’ll have to use some string manipulation.

First, split by the name of the cookie plus “=”.

document.cookie.split("foo=")

That will give you an array of two strings: all the cookies before “foo=” in the first element, and everything after “foo=” in the second. This is where our value is. We can use pop() to retrieve the last element of the array, which is where our value will be.

Unless the value is the last cookie (which we can never be sure of in the middle of execution), we’ll need to split again, this time on “;”, since that’s what separates the cookies. After this split, the value should be the first element of the array that’s returned, and with shift() we can retrieve it. Let’s put this all together.

document.cookie.split("key=").pop().split(";").shift()

The above code should get you the value of any cookie if you replace “key” with the name of the cookie.
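
Wrapped up as a reusable function, the same trick looks like this (a minimal sketch; getCookie is just an illustrative name, and it doesn’t guard against one cookie’s name being a suffix of another’s):

// Returns the cookie's value, or undefined if it isn't set.
function getCookie(name) {
  var parts = document.cookie.split(name + "=");
  if (parts.length < 2) return undefined; // cookie not present
  return parts.pop().split(";").shift();
}

getCookie("foo"); // "foo"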

Expiration

You can make cookies expire. This is useful for tokens that authenticate users. After they expire, the site logs the user out.

Before, we simply did document.cookie = "foo=foo". If we wanted foo to expire, we would do this instead.

document.cookie = "foo=foo; expires=" + date.toUTCString()

date would be a Date object that you set. For example, if you wanted it to expire in 3 days you’d do this.

var date = new Date()
// three days from now, in milliseconds
date.setTime(date.getTime() + 3 * 24 * 60 * 60 * 1000)
document.cookie = "foo=foo; expires=" + date.toUTCString()
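
The same mechanism is commonly used to delete a cookie: set its expiration to a date in the past and the browser will discard it. A minimal sketch:

document.cookie = "foo=; expires=" + new Date(0).toUTCString()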

This has been a quick-n-dirty guide to browser cookies.


Decoding in Elm

When you make a call to an API in Elm, you have to decode the result into types. This can be hard to wrap your head around at first, and it takes more steps than you might expect (certainly more than the single step it takes in JavaScript to turn JSON into usable data: JSON.parse).

When you get a response from an API, it will come wrapped in a Result Http.Error String type. You’ll have to pattern match on it with a case expression to see if it has errors.

case result of
    Ok jsonBlob ->
        --do something with json blob like decode it. 
   
    Err err -> 
        --handle error

This will only catch errors from the server itself, a status 500 for example, among others. If it returns a status 200 and the response is in this form:

{ success: false, error: "something went wrong" }

You’ll have to handle that error yourself in the Ok case.

Let’s say it returns a Player. The JSON may look something like this.

"{ \"success\": true, \"payload\": { \"id\": 1, \"name\": \"bob\", \"age\": 29 } }"

Elm’s Json.Decode package provides us with a decodeString function.

decodeString : Decoder a -> String -> Result String a

It takes in a Decoder and a String and returns a Result.

The Decoder that you pass in will look like this:

playerDecoder : Decoder Player
playerDecoder = 
    at ["payload"]
        (decode Player
            |> required "id" int
            |> required "name" string
            |> required "age" int
        )
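
For reference, the Player type assumed by this decoder would be an alias along these lines (a sketch inferred from the JSON payload above):

type alias Player =
    { id : Int
    , name : String
    , age : Int
    }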

It’s important to note that order matters here: you require the properties in the order they are defined on the Player type. decode, required, int, string, and at are all functions that come from the Json.Decode and Json.Decode.Pipeline packages, so the above code assumes you have both of those packages installed and imported, and that you have a Player type defined. The at function tells the decoder where the object is located; in this case it's in a property called payload. Notice "payload" is in a list. That's because at can handle nested data. For example, if the Player were at payload.data then we would have both properties in the list, like so.

at ["payload", "data"]

The rest is fairly self-explanatory. We are decoding the string into a Player, requiring that it has id, name, and age. The JSON blob may have more properties than these, but these are all we care about. If the Player type has more properties, they need to be included in the decoder. For properties that might be missing from the JSON, use the optional function instead of required. optional is just like required except that it takes one more argument: a default value to use when the property doesn't exist. So for a list you might pass [] as the third argument, and for a string usually "".

So let's use the decoder we made. Remember, we made it for the decodeString function; it will be the first argument.

decodeString playerDecoder jsonBlob

There's still more to be done. The above function call will give us a Result, not a Player, so we need to unwrap the Result to get the Player.

case decodeString playerDecoder jsonBlob of
    Ok value -> 
        value
    
    Err err -> 
        --do something with error or return default type

The above code will return the value if the result is okay. In this case, the value will be a Player.

Now let's put it all together. Let's say we want to update our model with the new player.

updateModel : (Result Http.Error String) -> Model -> Model
updateModel result model =
    let 
        playerDecoder = 
            at ["payload"]
                (decode Player
                    |> required "id" int
                    |> required "name" string
                    |> required "age" int
                )
        decodePlayer jsonBlob =
            case decodeString playerDecoder jsonBlob of
                Ok value -> 
                    value
    
                Err err -> 
                    Player 0 "" 0

    in 
        case result of
            Ok jsonBlob ->
                { model | player = decodePlayer jsonBlob }
   
            Err err -> 
                { model | error = decodeError err }

The updateModel function updates the model with either a player or an error. decodeError would be a function similar to decodePlayer, except that it decodes an error instead of a player. It's not defined in the snippet above, but it would use the same concepts. In decodePlayer, hitting the Err branch means something was wrong with your decoder, so you must have made a mistake in playerDecoder. You can do something more sophisticated with the error, but for now we're just returning a default player.
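
If you want something concrete to start from, a minimal placeholder for decodeError that fits the snippet (assuming model.error is a String; this just stringifies the Http.Error rather than decoding a message from the server) might be:

decodeError : Http.Error -> String
decodeError err =
    toString err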

This has been a quick-n-dirty guide to decoding JSON payloads in Elm.

GraphQL requests with JSON

GraphiQL makes GraphQL queries very easy, almost deceptively easy. But it’s basically just for testing. What happens when you want to send a real request using JSON? This post will show you how, and how the JSON version differs from the GraphiQL version.

Let’s say you have a mutation that you run on GraphiQL.

mutation {
  createRole(name: "arole", isDefault: false) {
    id
  }
}

That will create a role, depending on how you defined your mutation. This one creates a role with the name “arole” and returns its id. This is exactly how it looks in GraphiQL. Now let’s look at the JSON version as it would appear in Postman.

{
  "query": "mutation { createRole(name: \"arole\", isDefault: false) { id }}"
}

In JSON, you basically put the entire GraphiQL version of the query into a “query” property in string form. You cannot use single quotes (‘) for strings; you must use double quotes and escape them (at least in Postman).
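
Outside of Postman, you send the exact same JSON body from code. Here is a sketch using JavaScript’s fetch; the endpoint URL is a placeholder for your own GraphQL server:

// POST the mutation as a JSON body to a GraphQL endpoint (placeholder URL).
fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "mutation { createRole(name: \"arole\", isDefault: false) { id } }"
  })
})
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); });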

This has been a quick-n-dirty guide to making GraphQL requests with JSON.

Going Down the Rabbit Hole…

I inherited a project that involved Terraform. In the project, modules were used. The one other time I’d used Terraform, I used resources. Modules seemed similar, but I hadn’t used them before, so I wanted to know how they differed. I found an article online about it that was part of a larger series. The previous part in the series was about why they chose Terraform instead of Chef or Ansible, among others. Both of those tools are also used in my circle, so I was curious about their take. As I started to read that article, Packer was brought up. I had never heard of Packer, so I googled it. While reading about it, I found it was very similar to Docker. So I wondered how they’d differ and which one was better, and I googled “docker vs packer”. Now I was four topics removed from my original task, reading about Docker vs Packer when I had first set out to learn about Terraform modules.

I think it’s good to go beyond the bare minimum one must learn to accomplish a task, but where do we draw the line? How far down the rabbit hole do we go? We have to cut ourselves off at some point or we’d be doing random research for hours. I don’t know if I have the right answer, but in this instance I turned back from the Docker vs Packer research and went back to reading why Terraform was a good choice over the others. That’s also a bunny trail, but one that is still related to my current task.

So when you go on your bunny trails (which is okay; that’s how we learn), just ask yourself as you go deeper: “Is this related to my current task?” If not, stop there and recursively finish your research to get back to your current task. Of course, this only applies when you are on someone else’s time (like a boss’s or client’s). If it’s your own time, then study away and soak up the random knowledge.

Solving the ../../../../lib/myService.js Problem in Node

If you’ve ever required a service from deep inside a large project, you are familiar with this problem. The following solution uses absolute paths instead of relative paths to solve it.

In your app.js (or index.js or server.js or main.js or whatever you call your main js file in the root directory of your project) add the following function.

// Define this once in your root file; it resolves paths relative to the project root.
global.include = file => require(__dirname + '/' + file);

Now instead of figuring out how many directories away you are from your lib folder and typing this out

const myService = require('../../../lib/myService');

You can simply do the following

const myService = include('lib/myService');
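
If you’d rather not concatenate path strings by hand, Node’s built-in path module works just as well (an equivalent sketch):

// Same idea, using path.join to build the absolute path.
global.include = file => require(require('path').join(__dirname, file));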

Shout out to amoniker for writing the article that described this solution. https://coderwall.com/p/th6ssq/absolute-paths-require

Signing into AWS with aws-cli and multiple accounts

aws-cli allows you to interact with AWS from your command line. The first thing you have to do is log in, but if you have multiple AWS accounts, you want to make sure you are signing into the right one. You should have two files in the .aws directory in your home folder: credentials and config. credentials should look something like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....

But if you have multiple accounts then it should look more like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....
[account1]
aws_access_key_id = 4FD...
aws_secret_access_key = 02e....
[account2]
aws_access_key_id = 5E8...
aws_secret_access_key = ab0c....

Your ~/.aws/config file is generated with aws configure and should look something like this. Note that named profiles in config (unlike in credentials) take a profile prefix.

[default]
region = us-east-1
output = json
[profile account1]
region = us-east-1
output = json
[profile account2]
region = us-east-2
output = json

So now when you log in to AWS you can specify a profile.

$(aws ecr get-login --no-include-email --region us-east-1 --profile account2)

You'd think that with your region in your config you wouldn't need to pass it; that wasn't the case for me. --no-include-email also seems to be necessary, although on a server once I was able to simply use $(aws ecr get-login) and it worked.
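
The --profile flag works the same way with any other aws-cli command, for example:

$ aws s3 ls --profile account1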

Changing your Ruby version

Sometimes you’ll need different versions of Ruby for different projects. chruby allows you to have multiple versions installed at once and choose which one to use at any given time. This tutorial assumes you are on a Mac (sorry, Linux and Windows users). First, install chruby and ruby-install.

brew install chruby --HEAD
brew install ruby-install --HEAD

Then install the needed version of Ruby. Let’s assume you need 2.3.1.

ruby-install ruby 2.3.1

Now change to that version of Ruby. If you run the chruby command now, you won’t see the new version in the list of installed versions until you restart your shell. So let’s restart your shell to refresh the list of Ruby versions displayed by chruby.

exec bash -l

Now if you run chruby you will see the version you just installed, and you can switch to it.

chruby 2.3.1
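
You can confirm the switch with ruby -v, which should now report 2.3.1.

$ ruby -v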

But your gems will still be on the old version. Let’s get the gems for the new version.

gem install bundler

Now, from your project’s directory, let’s install its gems for 2.3.1.

bundle

And that’s it. You should be all set with your new version of Ruby. If you want to change back to another version of Ruby that you have installed, you should not have to repeat the bundle commands.

Ways to handle Terraform variables

There are three ways to run Terraform commands; it works the same whether you use destroy, plan, or apply. We’ll just use terraform apply in this post.

Each command requires variables, so when we say “three ways to run these commands” we really mean three ways to handle the variables.

Way One: prompt

$ terraform apply

This command does not pass any variables, so you will be prompted for things like your access key and secret key. You can simply answer the prompts and the command will run.

Way Two: arguments

$ terraform apply -var "access_key=ADfakdjafioauvuasvjekjfjd"

If you have several variables to pass, you can use a backslash to continue on a new line.

$ terraform apply \
-var "access_key=DAJFKEJfkajdfiadlajkjf823" \
-var "key_name=my_key" \
-var "last_var=foobar"

You can use this method to override variables you have defined elsewhere.

Way Three: terraform.tfvars

$ vim terraform.tfvars

access_key = "DAJFKEJfkajdfiadlajkjf823" 
key_name = "my_key" 
last_var = "foobar"

Then you can simply run terraform apply with no arguments and no prompts.

Also keep in mind that you can use environment variables with any of these.

$ export DO_PAT={YOUR_PERSONAL_ACCESS_TOKEN}

Then reference it in your command: $ terraform apply -var "do_token=${DO_PAT}"
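
Terraform will also read environment variables named TF_VAR_<name> directly, so (assuming a variable named do_token) you can skip the -var flag entirely:

$ export TF_VAR_do_token={YOUR_PERSONAL_ACCESS_TOKEN}
$ terraform apply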

This is just the tip of the iceberg when it comes to variables in Terraform. You can import them from modules and other places, but I won't cover that here...maybe I'll add it later or do another post about it.

Agent forwarding

Agent forwarding is a security measure in which you have to go through a preliminary server in order to reach the server that you really need. The server being protected will not have a public IP address. You must SSH into the preliminary server with that server’s public IP, and then from there SSH into the protected server using its private IP. Let’s see what that looks like.

$ eval "$(ssh-agent)"
...
$ ssh-add ~/.ssh/id_rsa
Identity added: /Users/user/.ssh/id_rsa (/Users/user/.ssh/id_rsa)
$ ssh -A -i ~/.ssh/id_rsa ubuntu@
...
ubuntu:~$ ssh ubuntu@

~/.ssh/id_rsa is the most common path for private keys, but if your private key is somewhere else, use that path instead. The -A option on the ssh command is what enables agent forwarding. The -i (identity file) option lets you specify the path to your private key.

UPDATE: It appears that if you run the ssh-agent and ssh-add commands, you will not need the -i ~/.ssh/id_rsa part of the command.
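
If you go through the same preliminary server often, you can make agent forwarding the default for it in your ~/.ssh/config (a sketch; the host alias and IP are placeholders):

Host bastion
    HostName 203.0.113.10
    User ubuntu
    ForwardAgent yes

After that, ssh bastion forwards your agent automatically.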

Bash Dates

Dates can be very simple in bash. The date command will output the current date.

$ date
Wed Jul 19 13:45:03 UTC 2017

To format the date use the following:

%m for month
%d for day
%Y for year
%H for hours
%M for minutes
%S for seconds

You precede any formatting with a plus sign and can use the above symbols to format the date any way you like.

$ date +%m-%d-%Y
07-19-2017
$ date +%m/%d/%Y
07/19/2017

If your format is going to have spaces, you need to put it in quotes.

$ date "+%m-%d-%Y %H:%M:%S"
07-19-2017 08:48:24

Put the date in a specific time zone with TZ.

TZ="America/Chicago" date
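
A common pattern is capturing a formatted date in a shell variable with command substitution:

$ today=$(date +%m-%d-%Y)
$ echo $today
07-19-2017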

This has been a quick-n-dirty guide to dates in bash.