Decoding in Elm

When you make a call to an API in Elm, you have to decode the result into types. This can be hard to wrap your head around at first, and it takes more steps than you might expect (certainly more than the one step it takes in JavaScript to turn JSON into usable data: JSON.parse).

When you get a response from an API, it will be in the form of a (Result Http.Error String) type. You'll have to pattern match on it with a case expression to see if it has errors.

case result of
    Ok jsonBlob ->
        -- do something with the JSON blob, like decode it

    Err err ->
        -- handle the error

This will only catch errors from the server, a 500 status for example, among other errors. If the server returns a 200 status and the response is in this form:

{ success: false, error: "something went wrong" }

You’ll have to handle that error yourself in the Ok case.

Let's say it returns a Player. The JSON may look something like this.

"{ \"success\": true, \"payload\": { \"id\": 1, \"name\": \"bob\", \"age\": 29 } }"

Elm's Json.Decode package provides us with a decodeString function.

decodeString : Decoder a -> String -> Result String a

It takes in a Decoder and a String and returns a Result.
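The examples below assume a Player record type and a couple of imports along these lines (a minimal sketch):

import Json.Decode exposing (Decoder, at, int, string)
import Json.Decode.Pipeline exposing (decode, required, optional)

type alias Player =
    { id : Int
    , name : String
    , age : Int
    }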

The Decoder that you pass in will look like this

playerDecoder : Decoder Player
playerDecoder = 
    at ["payload"]
        (decode Player
            |> required "id" int
            |> required "name" string
            |> required "age" int
        )

It's important to note that order matters here: you must require the properties in the order the fields are defined on the Player type. decode, required, int, string, and at are all functions that come from the Json.Decode and Json.Decode.Pipeline packages, so the above code assumes you have both of those packages installed and imported and that you have a Player type defined.

The at function tells the decoder where the object is located; in this case it's in a property called payload. Notice that "payload" is in a list: the at function can handle nested data. For example, if the Player was at payload.data then we would have both properties in the list, like so.

at ["payload", "data"]

The rest is self-explanatory. We are decoding the string into a Player, requiring that the Player has id, name, and age. The JSON blob may have more properties than these, but they are all we care about. Every field on the Player type, however, must be covered by the decoder.

If a property might be missing from the JSON, you can use the optional function instead of required. The optional function is just like required except that it takes one more argument: a default value to use when the property doesn't exist. So if the field is a list you may have [] as your third argument, and a string will usually have "" as the third argument.
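For example, if the payload might omit name, the decoder could default it to an empty string (a sketch; the rest of the post keeps using the required-only decoder above):

playerDecoder : Decoder Player
playerDecoder =
    at ["payload"]
        (decode Player
            |> required "id" int
            |> optional "name" string ""
            |> required "age" int
        )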

So let's use the decoder we made. Remember we made it for the decodeString function: the decoder is the first argument and the JSON string is the second.

decodeString playerDecoder jsonBlob

There's still more to be done. The above function call will give us a Result, not a Player, so we need to pattern match on the Result to get the Player.

case decodeString playerDecoder jsonBlob of
    Ok value -> 
        value
    
    Err err -> 
        --do something with error or return default type

The above code will return the value if the result is okay. In this case, the value will be a Player.

Now let's put it all together. Let's say we want to update our model with the new player.

updateModel : (Result Http.Error String) -> Model -> Model
updateModel result model =
    let 
        playerDecoder = 
            at ["payload"]
                (decode Player
                    |> required "id" int
                    |> required "name" string
                    |> required "age" int
                )
        decodePlayer jsonBlob =
            case decodeString playerDecoder jsonBlob of
                Ok value -> 
                    value
    
                Err err -> 
                    Player 0 "" 0

    in 
        case result of
            Ok jsonBlob ->
                { model | player = decodePlayer jsonBlob }
   
            Err err -> 
                { model | error = decodeError err }

The updateModel function updates the model with either a player or an error. decodeError would be a function similar to decodePlayer except that it turns the Http.Error into something the model can store. It's not defined in the snippet above, but it uses the same concepts. In decodePlayer, the Err case means the JSON didn't match your decoder, so you most likely made a mistake in playerDecoder (or the server sent a shape you didn't expect). You can do something more sophisticated with the error, but for now we're just returning a default player.
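For completeness, here's one way decodeError might look if model.error is a String (a sketch, assuming the elm-lang/http Error type):

decodeError : Http.Error -> String
decodeError err =
    case err of
        Http.BadUrl url ->
            "Bad url: " ++ url

        Http.Timeout ->
            "Request timed out"

        Http.NetworkError ->
            "Network error"

        Http.BadStatus response ->
            "Bad status: " ++ toString response.status.code

        Http.BadPayload message _ ->
            "Bad payload: " ++ message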

This has been a quick-n-dirty guide to decoding json payloads in Elm.


GraphQL requests with JSON

GraphiQL makes GraphQL queries very easy, almost deceptively easy. But it's basically just for testing. What happens when you want to send a real request using JSON? This post will show you how, and how the JSON version differs from the GraphiQL version.

Let’s say you have a mutation that you run on GraphiQL.

mutation {
  createRole(name: "arole", isDefault: false) {
    id
  }
}

That will create a role, depending on how you defined your mutation. This one creates a role with the name "arole" and returns its id. This is exactly how it looks in GraphiQL. Let's look at the JSON version as it would appear in Postman.

{
  "query": "mutation { createRole(name: \"arole\", isDefault: false) { id }}"
}

In JSON, you basically put the entire GraphiQL version of the query into a "query" property in string form. You cannot use single quotes (') for strings; you must use double quotes and escape them (at least in Postman).
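The same body can be sent from any HTTP client, not just Postman. For example, with curl (the endpoint URL here is a placeholder):

curl -X POST https://example.com/graphql \
  -H "Content-Type: application/json" \
  -d '{ "query": "mutation { createRole(name: \"arole\", isDefault: false) { id }}" }'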

This has been a quick-n-dirty guide to doing GraphQL requests with JSON.

Going Down the Rabbit Hole…

I inherited a project that involved Terraform. In the project, modules were used. The one other time I had used Terraform, I used resources. Modules seemed similar, but I hadn't used them before, so I wanted to know how they differed. I found an article online about it that was part of a larger series. The previous part in the series was about why they chose Terraform over Chef and Ansible, among others. Both of those tools are also used in my circle, so I was curious about their take. So I started to read that article, and then Packer was brought up. I had never heard of Packer, so I googled it. While reading about it, I found it was very similar to Docker. So I wondered how they'd differ and which one was better, and I googled "docker vs packer". Now I was four topics removed from my original task, reading about Docker vs Packer, when I had first set out to learn about Terraform modules.

I think it's good to go beyond the bare minimum one must learn to accomplish a task, but where do we draw the line? How far down the rabbit hole do we go? We have to cut ourselves off at some point or we'd be doing random research for hours. I don't know if I have the right answer, but in this instance I turned back from the Docker vs Packer research and went back to reading why Terraform was a good choice over the others, which is also a bunny trail, but one that is still related to my current task.

So when you go on your bunny trails (which is okay; that's how we learn), just ask yourself as you go deeper: "Is this related to my current task?" If not, stop there and recursively finish your research to get back to your current task. Of course, this only applies when you are on someone else's time (like a boss's or a client's). If it's your own time, then study away and soak up the random knowledge.

Solving the ../../../../lib/myService.js Problem in Node

If you've ever required a service from deep within a large Node project, you are familiar with this problem. The following solution uses absolute paths instead of relative paths to solve it.

In your app.js (or index.js or server.js or main.js or whatever you call your main js file in the root directory of your project) add the following function.

global.include = file => require(__dirname+'/'+file);
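A slightly more robust variant uses Node's path module so you don't have to worry about path separators (a sketch of the same idea):

const path = require('path');
global.include = file => require(path.join(__dirname, file));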

Now instead of figuring out how many directories you are from your lib folder and typing this out

const myService = require('../../../lib/myService');

You can simply do the following

const myService = include('lib/myService');

Shout out to amoniker for writing the article that described this solution. https://coderwall.com/p/th6ssq/absolute-paths-require

Signing into AWS with aws-cli and multiple accounts

aws-cli allows you to interact with AWS from your command line. The first thing you have to do is log in, but if you have multiple AWS accounts you want to make sure you are signing into the right one. You should have two files in the .aws directory in your home folder: credentials and config. credentials should look something like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....

But if you have multiple accounts then it should look more like this.

[default]
aws_access_key_id = C7E...
aws_secret_access_key = YTgn....
[account1]
aws_access_key_id = 4FD...
aws_secret_access_key = 02e....
[account2]
aws_access_key_id = 5E8...
aws_secret_access_key = ab0c....

Your ~/.aws/config file is generated with aws configure and should look something like this.

[default]
region = us-east-1
output = json
[account1]
region = us-east-1
output = json
[account2]
region = us-east-2
output = json

So now when you log in to AWS you can specify a profile.

$(aws ecr get-login --no-include-email --region us-east-1 --profile account2)
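The --profile flag works the same way with other aws-cli commands. For example, to list the S3 buckets under account1:

$ aws s3 ls --profile account1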

You'd think that if your region was in your config you wouldn't need the --region flag. This wasn't the case for me. --no-include-email seems to be necessary as well, although on a server once I was able to simply use $(aws ecr get-login) and it worked.

Changing your Ruby version

Sometimes you'll need different versions of Ruby for different projects. chruby allows you to have multiple versions installed at once and choose which one to use at any given time. This tutorial assumes you are on a Mac (sorry, Linux and Windows users). First install chruby and ruby-install.

brew install chruby --HEAD
brew install ruby-install --HEAD

Then install the needed version of Ruby. Let’s assume you need 2.3.1.

ruby-install ruby 2.3.1

Now change to that version of Ruby. If you run the chruby command now, you won't see the new version in the list of installed versions until you restart your shell. So let's restart your shell to refresh the list of Ruby versions displayed by chruby.

exec bash -l

Now if you use chruby you will see the version you just downloaded. So now you can switch to that version.

chruby 2.3.1
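You can confirm the switch with ruby -v (the output below is illustrative; your patch level and platform will vary):

$ ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-darwin16]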

But your gems will still be on the old version. Let’s get the gems for the new version.

gem install bundler

Now let’s install the gems for 2.3.1.

bundle

And that's it. You should be all set with your new version of Ruby. If you change back to another version of Ruby that you have already installed and bundled under, you should not have to repeat the bundle commands.

Ways to handle Terraform variables

There are three ways you can run Terraform commands, and they work the same whether you use destroy, plan, or apply. We'll just use terraform apply in this post.

Each command requires variables. So when we say “3 ways to run these commands” we really mean 3 ways to handle the variables.
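All three ways assume the variables are declared somewhere in your .tf files, along these lines (a minimal sketch using the names from the examples below):

variable "access_key" {}
variable "key_name" {}
variable "last_var" {}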

Way One: prompt

$ terraform apply

This command does not pass any variables, so you will be prompted for things like your access key and secret key. You can simply answer the prompts and the command will run.

Way Two: arguments

$ terraform apply -var "access_key=ADfakdjafioauvuasvjekjfjd"

If you have several variables to pass then you can use a backslash to go to a new line.

$ terraform apply \
-var "access_key=DAJFKEJfkajdfiadlajkjf823" \
-var "key_name=my_key" \
-var "last_var=foobar"

You can use this method to override variables (if you have them defined elsewhere).

Way Three: terraform.tfvars

$ vim terraform.tfvars

access_key = "DAJFKEJfkajdfiadlajkjf823" 
key_name = "my_key" 
last_var = "foobar"

Then you can simply run terraform apply with no arguments and no prompts.

Also keep in mind that you can use environment variables for any of these.

$ export DO_PAT={YOUR_PERSONAL_ACCESS_TOKEN}

Then use it in your command:

$ terraform apply -var "do_token=${DO_PAT}"
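Terraform also picks up any environment variable prefixed with TF_VAR_ as an input variable directly, so the following works without the -var flag (assuming a declared variable named do_token):

$ export TF_VAR_do_token={YOUR_PERSONAL_ACCESS_TOKEN}
$ terraform apply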

This is just the tip of the iceberg when it comes to variables in Terraform. You can import them from modules and other places, but I won't cover that here...maybe I'll add it later or do another post about it.

Agent forwarding

Agent forwarding is a security measure in which you have to go through a preliminary server (often called a bastion or jump host) in order to get to the server that you really need to reach. The server being protected will not have a public IP address. You must ssh into the preliminary server using that server's public IP, and then from there ssh into the protected server using its private IP. Let's see what that looks like.

$ eval "$(ssh-agent)"
...
$ ssh-add ~/.ssh/id_rsa
Identity added: /Users/user/.ssh/id_rsa (/Users/user/.ssh/id_rsa)
$ ssh -A -i ~/.ssh/id_rsa ubuntu@
...
ubuntu:~$ ssh ubuntu@

~/.ssh/id_rsa is the most common path for private keys, but if you have your private key somewhere else then you need to use that path instead. The -A option on the ssh command is what enables agent forwarding. The -i (identity file) option is what lets you specify the path to your private key.

UPDATE: It appears that if you run the ssh-agent and ssh-add commands first, then you will not need the -i ~/.ssh/id_rsa part of the command.
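You can also make the forwarding permanent in ~/.ssh/config instead of passing -A every time (a sketch; the host alias and IP are placeholders):

Host bastion
    HostName <public-ip-of-preliminary-server>
    User ubuntu
    ForwardAgent yes

Then ssh bastion will forward your agent automatically.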

Bash Dates

Dates can be very simple in bash. The date command will output the date.

$ date
Wed Jul 19 13:45:03 UTC 2017

To format the date use the following:

%m for month
%d for day
%Y for year
%H for hours
%M for minutes
%S for seconds

You precede any formatting with a plus sign and can use the above symbols to format the date any way you like.

$ date +%m-%d-%Y
07-19-2017
$ date +%m/%d/%Y
07/19/2017

If your format is going to have spaces then you need to put it in quotes.

$ date "+%m-%d-%Y %H:%M:%S"
07-19-2017 08:48:24

Put the date in a specific time zone with TZ.

TZ="America/Chicago" date

This has been a quick-n-dirty guide to dates in bash.

Intro to Bash Scripting

Knowing Bash is very useful because it's available on any Unix system, and when you are dealing with servers you have to use it a lot. Bash scripts can be as simple as a list of commands or complicated, with lots of functions and logic. Anything you can do in a terminal you can do in a Bash script, and vice versa. This will be a very basic intro: it will show how to declare and use variables, make and use functions, and use if-else statements. But we're gonna start with the hello world bash script.

Hello World

You can make a script and run it with one command if you want.

$ echo "echo hello world" > helloworld && sh helloworld
hello world

Above we are redirecting the output of echo "echo hello world" into a file called helloworld, and then (if that is successful) we run the sh program with our file as an argument. Now this is the laziest way to do it. We should at least call our file helloworld.sh so we know it's a bash script. But better practice would also include the interpreter line at the top and making the script executable. Then we can run it by itself.

$ vim helloworld.sh

#!/bin/bash

echo hello world

$ chmod +x helloworld.sh
$ ./helloworld.sh
hello world

Running a script with sh will work regardless of permissions. To run the script directly you will need to make it executable. (I've had weird errors with one way and not the other, so if that happens just use whichever way works.)

Variables

Variable declarations are pretty simple. Type the name of the variable then the equal sign then what the variable will be. When you use the variable you precede it with a dollar sign.

foo="bar"
echo $foo

The biggest mistake people make when declaring variables is putting spaces around the equal sign. There must be no spaces.
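For example, this fails because bash parses foo as a command name (the exact error text varies by shell):

$ foo = "bar"
-bash: foo: command not found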

Functions

foo() {
  echo "bar"
}

The code above declares a function. Even though it looks like a function you'd see in other languages like JavaScript, it doesn't behave that way. It has parentheses, but no parameters go there, and you don't call a function with parentheses either. You call the function simply by typing its name; any arguments follow the name and are available inside the function as $1, $2, and so on. This is how it would work with arguments.

#foobar.sh

function foo () {
  echo $1
}

foo bar

$ sh foobar.sh
bar

if-else Statements

if [[ 1 == 1 ]]; then
  echo "one equals one!"
else
  echo "somehow one does not equal one"
fi

An alternate way to write an if-else is:

if [[ 1 == 1 ]]
then
  echo "one equals one!"
else
  echo "somehow one does not equal one"
fi

I prefer the former.

Arithmetic

Any variable assignment that involves arithmetic must be evaluated, for example by preceding it with let.
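A quick sketch of the idea:

$ let sum=1+2
$ echo $sum
3
$ x=5
$ let "x = x * 2"
$ echo $x
10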
That’s it for now. I’ll do another post on dates in Bash later.