How to create a Cartesian product of two sets

Elixir has a very rich set of functions for working with collections in the Enum module. The name Enum, however, is a bit unfortunate: it makes me think of [enumerated types](https://en.wikipedia.org/wiki/Enumerated_type) more than of enumerables.

Anyway, the other day I needed to compute the Cartesian product of two lists in my Elixir app, and I knew there would be something for that in the Enum module. I went through all the documentation and couldn’t find anything useful. That is when it hit me that this is a piece of cake for comprehensions.

So, without further ado, here is the code to get a Cartesian product in Elixir:

a = [1, 2, 3]
b = [1, 2, 3]

cp = for x <- a, y <- b, do: {x, y}
IO.inspect(cp)
# =>
# [{1, 1}, {1, 2}, {1, 3}, {2, 1}, {2, 2}, {2, 3}, {3, 1}, {3, 2}, {3, 3}]
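
Since comprehensions support multiple generators and filters, variations on this are easy. For example, here is a sketch that restricts the product to ordered pairs:

# add a filter to keep only pairs where x is smaller than y
pairs = for x <- a, y <- b, x < y, do: {x, y}
IO.inspect(pairs)
# =>
# [{1, 2}, {1, 3}, {2, 3}]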

How to model an enumerated list of values

When dealing with things like statuses, you may need to store and access a fixed set of values. The following is one way of doing it:

defmodule ProductStatus do
  def active, do: :active
  def inactive, do: :inactive
  def cancelled, do: :cancelled
end

If you have a lot of values like these, the above code can become tedious. Metaprogramming to the rescue :)

defmodule ProductStatus do
  @statuses ~w[
    active
    inactive
    cancelled
  ]a

  for status <- @statuses do
    def unquote(status)(), do: unquote(status)
  end
end

This allows you to use statuses as ProductStatus.active.
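
If you also need the full list of values, say for validations, a small extension of the same idea could look like this (the all/0 and valid?/1 helpers are my own suggestion, not part of the original snippet):

defmodule ProductStatus do
  @statuses ~w[
    active
    inactive
    cancelled
  ]a

  for status <- @statuses do
    def unquote(status)(), do: unquote(status)
  end

  # expose the whole list, and a membership check
  def all, do: @statuses
  def valid?(status), do: status in @statuses
end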

A simple way to automatically set the semantic version of your Elixir app

There is a neat trick which I bumped into while doing Rails development, and which I’ve been using to set the semver value of my Elixir apps.

This works if you use Git for your version control. The basic idea is to use git tags, plus the number of commits since the latest tag, to generate your version number. Elixir allows you to use a version string like the one below (you can read more about this at https://hexdocs.pm/elixir/Version.html):

[MAJOR].[MINOR].[PATCH]-[pre_release_info]+[build_info]

When I want to bump the major or minor version, I create a tag with the version information, e.g. v1.4, using the commands:

git tag v1.4 --annotate --message 'Version 1.4'
git push --tags --all

I use the git describe command to get the major, minor and patch info. Part of the describe output also goes into the build information:

git describe
# => v1.4-270-gfa78ab71e
# => major.minor-patch-git_commit_id

Putting all of this together, I have the following in my mix config. It also uses the BUILD_NUMBER passed in by Jenkins (the build server that we use):

defmodule Dan.Mixfile do
  use Mix.Project

  def project do
    [app: :dan,
     version: app_version(), # call out to another function which generates the version
     # ...
    ]
  end

  # ...

  def app_version do
    # get suffix
    build_number = System.get_env("BUILD_NUMBER")
    suffix = if build_number, do: ".build-#{build_number}", else: "" # => ".build-443" or ""

    # get git version
    {git_desc, 0} = System.cmd("git", ~w[describe])
    ["v" <> major_minor, patch, git_commit_id] = git_desc |> String.trim |> String.split("-") # => ["v1.4", "270", "fa78ab71e"]
    "#{major_minor}.#{patch}+ref-#{git_commit_id}#{suffix}" # => 1.4.270+ref-fa78ab71e.build-443
  end

end

Creating such a beautiful version number without showing it anywhere wouldn’t be very useful :) I usually put the version information of the app in the footer, and in a meta tag inside the head (if it is a Phoenix app):

defmodule Dan do

  # cache the app_version during build time
  @app_version Mix.Project.config[:version]
  def app_version, do: @app_version

end

Inside the app.html

<!doctype html>
...
<meta name="version" content="<%= Dan.app_version %>">
...

So, now when something goes wrong, I can take a look at the current version of the app by visiting a page and know precisely which git commit reproduces the problem. Our QA team also uses this information when filing bug reports.

I also send this version info to my error monitoring and metrics services like Rollbar and AppSignal.

Hope you find this technique useful :)

How to open your web browser automatically when you start your Phoenix app

I have added this line to my .iex.exs to automatically open my web browser when I start my phoenix app

# .iex.exs
spawn(fn -> :os.cmd('xdg-open http://localhost:4000/') end)
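
xdg-open is Linux-specific. If you hop between machines, a variant that branches on the operating system could look like this (a sketch using System.cmd/2):

# .iex.exs — pick the opener based on the OS
open_cmd =
  case :os.type() do
    {:unix, :darwin} -> "open" # macOS
    _ -> "xdg-open"            # Linux and friends
  end

spawn(fn -> System.cmd(open_cmd, ["http://localhost:4000/"]) end)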

Hat tip to: https://github.com/gfvcastro/phoenix_open_browser

The difference between the for comprehension and Enum.each

Look at the code below and try to guess what happens when you run each of the two snippets.

for

for {:resp, f} <- [{:resp, 3}, :b, {:resp, 4}] do
  IO.inspect f
end

Enum.each

Enum.each([{:resp, 3}, :b, {:resp, 4}], fn {:resp, f} ->
  IO.inspect f
end)

The for version chugs along just fine :) printing 3 and 4, which might surprise you. The Enum.each version blows up with a FunctionClauseError, as expected. That is because a pattern on the left of <- in a comprehension acts as a filter: elements that don’t match (like :b here) are silently skipped.

Be careful about using for in your code, especially in your tests. I had a small test containing the following lines:

assert length(stats) == 2
for {:resp, stat} <- stats do
  assert stat.meta == %{a: 3}
  assert stat.time_ms in 10..20
end

And it was passing every time, even when I changed the assertion to the code below.

for {:resp, stat} <- stats do
  assert stat.meta == nil
  assert stat.time_ms in 10..20
end

All because I had an incorrect pattern match: {:resp, stat} instead of {{:resp, _id}, stat}. So the for was filtering out all the stats and the inner block was not being executed even once.
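
One way to guard against this in tests is to collect what the comprehension actually matched and assert on the count. A sketch with the corrected pattern:

matched =
  for {{:resp, _id}, stat} <- stats do
    assert stat.meta == %{a: 3}
    assert stat.time_ms in 10..20
    stat
  end

# fails loudly if the pattern silently filtered everything out
assert length(matched) == 2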

How to rate limit curl using bash and Redis

Recently, I had to rate limit requests while consuming an API from one of our providers. I hacked together a simple script to do it using Redis. Hope you find it useful:

#!/bin/bash

HOURLY_LIMIT=500
while true
do
  # we increment a key which is rounded off to the hour
  if (( $(redis-cli --raw INCR "provider:$(date +%Y%m%d%H)") < $HOURLY_LIMIT ))
  then
    echo "making request"
    curl -s "http://provider.com/url"
  else
    echo "limit reached sleeping"
    sleep 1m
  fi
done

You can tweak the date +%Y%m%d%H expression to date +%Y%m%d%H%M to apply a rate limit per minute.
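
The same idea translates to Elixir directly. Here is a rough sketch, assuming the Redix library and a hypothetical make_request/0 in place of the curl call:

defmodule RateLimitedClient do
  @hourly_limit 500

  def run do
    {:ok, conn} = Redix.start_link()
    loop(conn)
  end

  defp loop(conn) do
    # increment a key which is rounded off to the hour, like the bash version
    key = "provider:" <> Calendar.strftime(DateTime.utc_now(), "%Y%m%d%H")
    {:ok, count} = Redix.command(conn, ["INCR", key])

    if count < @hourly_limit do
      make_request()
    else
      IO.puts("limit reached, sleeping")
      Process.sleep(:timer.minutes(1))
    end

    loop(conn)
  end

  # hypothetical; replace with your HTTP client of choice
  defp make_request, do: IO.puts("making request")
end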

Using mnesia with distillery

If you are using mnesia with distillery, you may run into an error like the one below:

09:46:26.974 [info]  Application dynamic_store exited: DS.Application.start(:normal, []) returned an error: shutdown: failed to start child: DB
    ** (EXIT) an exception was raised:
        ** (UndefinedFunctionError) function :mnesia.create_schema/1 is undefined (module :mnesia is not available)
            :mnesia.create_schema([:"dynamic_store@127.0.0.1"])

This is because distillery doesn’t include mnesia in the release by default. You need to tell it to include :mnesia by adding it to the extra_applications option in your mix application:

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger, :mnesia],
      mod: {DS.Application, []}
    ]
  end
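
For context, the DB process in the error above is doing a standard mnesia bootstrap. A minimal sketch of that kind of startup code (the table name is illustrative, and error handling is omitted):

# create an on-disk schema for this node and start mnesia;
# create_schema/1 returns an :already_exists error on subsequent boots
:mnesia.create_schema([node()])
:ok = :mnesia.start()

# a sample table with two attributes
{:atomic, :ok} = :mnesia.create_table(:kv, attributes: [:key, :value])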

Open your text editor with the migration file when you run mix ecto.gen.migration

There is a neat little trick which I found while browsing Ecto’s code. Adding the following lines to your ~/.bashrc will open each new migration file in the text editor of your choice:

ECTO_EDITOR="code" # put the name of your editor here
export ECTO_EDITOR

Now, when you run mix ecto.gen.migration, it will open your editor with the freshly generated migration so you can modify it right away.

I actually use neovim to edit my code. However, it doesn’t open up from Erlang: I tried running :os.cmd 'nvim /tmp/a', but it fails with an error about stdin not being found.

How to migrate your web application to a different server with minimum downtime

I had to move one of my large web applications to a different server yesterday, and across providers at that (from DigitalOcean to an AWS EC2 instance). Here are the steps I took; hopefully they help others in the future:

  1. Install all the libraries needed for the app. Basically, follow the same steps I would for a fresh install. For me this required installing ruby, postgresql, nginx and letsencrypt.

  2. Get the app running with some fake data. This step may require you to copy over the ssl certs from your previous server.

  3. Create an entry in your /etc/hosts (on your local computer) to point to your new web server, e.g.

    52.1.210.129 getsimpleform.com

  4. Open your app and test it out. At this point I found that I had forgotten to move over the .env file which had the secrets and keys needed for the web application. So, I moved them and got the application working.

  5. Add your new server’s public key to your old server’s ~/.ssh/authorized_keys. This is to allow us to move data directly to the new server from the old server

  6. Import your database over ssh from the old server. My app uses a postgresql server, so I had to run the following:

    ssh ubuntu@getsimpleform.com "sudo -u postgres pg_dump -Fc --no-acl --no-owner simpleform_production | gzip" | gzip -d | sudo -u simpleform pg_restore --verbose --clean --no-acl --no-owner -d simpleform_production

  7. Test your app with the freshly imported database. At this point I realized I had to move over files that were uploaded to the old app, so I scped them over.

  8. You should have set up the systemd scripts (or any other init scripts) in step 1.

  9. Set up your old server’s nginx config to proxy all traffic to the new server by adding a proxy_pass, as shown below, but don’t reload nginx’s configuration yet; the script in step 10 takes care of that. You’ll also have to create an /etc/hosts entry on your old server so that the domain points to the new server.

    try_files $uri/index.html $uri.html $uri @proxy;
    
    location @proxy {
    
      proxy_set_header  X-Real-IP        $remote_addr;
      proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_redirect off;
    
      proxy_pass https://getsimpleform.com;
    }
  10. Now, write a script which can be executed from the new server.

    #!/bin/bash -e

    # stop the application on the new server
    # (this is so we can drop the database on the new server)
    echo stopping simpleform
    sudo systemctl stop simpleform.target

    # drop the database on the new server
    echo dropping db
    sudo -u simpleform dropdb simpleform_production

    # create a fresh database on the new server
    echo creating db
    sudo -u simpleform createdb simpleform_production

    # stop the application on the old server
    echo stopping simpleform
    ssh ubuntu@getsimpleform.com "sudo stop simpleform"

    # import the database from the old server to the new server
    echo importing db
    (ssh ubuntu@getsimpleform.com "sudo -u postgres pg_dump -Fc --no-acl --no-owner simpleform_production | gzip" | gzip -d | sudo -u simpleform pg_restore --verbose --clean --no-acl --no-owner -d simpleform_production) || /bin/true

    # start the application on the new server
    echo start local simpleform
    sudo systemctl start simpleform.target

    # reload the nginx configuration on the old server so it starts proxying to the new one
    echo reloading remote nginx
    ssh ubuntu@getsimpleform.com sudo nginx -s reload

  11. Change your DNS entries so that they point to the new server’s IP

That is it! Your application is now up on the new server. The dance in step 10 is required so that nobody sends you data which ends up on the old server but never makes it to the new one. You will have some downtime, but it will most probably be less than 5 minutes.

How to forward your local ports to a remote server using SSH

SSH is a great tool used by Linux/Unix sysadmins all over the world. One neat thing it allows you to do is connect to ports on a remote computer through SSH. This process is called tunneling, or creating an SSH tunnel, and the TCP traffic that flows over this connection is encrypted by SSH. So you get security without opening up ports on your remote computers to the public.

Let us take the simple example of accessing a postgresql database on a remote server (at one of the hosting providers like AWS or DigitalOcean). You don’t want to open port 5432 (the port postgresql listens on) to the internet, as this would allow anyone to attempt brute-force attacks on your database. So, you deny traffic on 5432 and access the port from your local computer through an SSH tunnel.

The ~/.ssh/config syntax for a tunnel is simple:

Host myserver
  # your server's domain name (e.g. myserver.com) or its IP address
  Hostname 192.168.1.1
  Port 22
  User goodcode
  # forward our local port 4000 to the localhost:5432 on the remote server which is the postgresql server
  LocalForward 4000 127.0.0.1:5432
  # forward our local port 5000 to the localhost:6379 on the remote server which is the redis server
  LocalForward 5000 127.0.0.1:6379

Let us break it down:

  1. Host myserver: creates an ssh configuration named myserver, which you can connect to using ssh myserver
  2. Hostname 192.168.1.1: tells ssh which host (domain name or IP) to connect to when you run ssh myserver
  3. Port 22: you can drop this if you use the default port 22; if sshd runs on a different port on the remote server, change it here
  4. User goodcode: tells ssh to use the username goodcode when connecting via ssh myserver
  5. LocalForward 4000 127.0.0.1:5432: this is what creates the actual tunnel. Here we forward our local port 4000 to port 5432 on the remote server
  6. LocalForward 5000 127.0.0.1:6379: just to show that you can create multiple tunnels over the same SSH connection, this also forwards local port 5000 to the redis instance on the remote server

Once we have this set up and have opened an ssh connection using ssh myserver, we can connect to postgresql and redis using the following commands:

# connect to postgresql on the remote server
psql --host localhost --port 4000 database_name
# connect to the redis instance on the remote server
redis-cli -h localhost -p 5000

You can also pass all of these options on the command line without creating an ssh config:

ssh -L 4000:localhost:5432 -L 5000:localhost:6379 -p 22 goodcode@192.168.1.1
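
From Elixir, the tunneled database looks like a local one. A minimal sketch, assuming the Postgrex driver and hypothetical credentials:

# connect to the remote postgresql server through the local end of the tunnel
{:ok, pid} =
  Postgrex.start_link(
    hostname: "localhost",
    port: 4000, # the LocalForward port, not 5432
    username: "goodcode",
    password: "secret",
    database: "database_name"
  )

Postgrex.query!(pid, "SELECT 1", [])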

When not to use apply for dynamically calling functions in Elixir

Elixir has a nice apply function which allows you to call any module’s function with a list of arguments, a triple commonly referred to as an MFA (module, function, arguments). However, I see apply being used in places where it shouldn’t be. Let us take the example below.

defmodule WorkerBehaviour do
  @callback perform(job :: Job.t()) :: any()
end

defmodule EmailWorker do
  @behaviour WorkerBehaviour
  def perform(job) do
  # ...
  end
end

defmodule ScreenshotWorker do
  @behaviour WorkerBehaviour
  def perform(job) do
  # ...
  end
end

defmodule Processor do
  def process_job(worker_module, job) do
    apply(worker_module, :perform, [job])
  end
end

In this example, Processor.process_job uses the apply function to send the job to the right worker. However, there is a more readable version of this code. Just use the following:

defmodule Processor do
  def process_job(worker_module, job) do
    worker_module.perform(job)
  end
end

Since in this particular scenario we know the name of the function beforehand, and the number of arguments is the same for all modules, we can directly invoke the required function on the module using the syntax above. This makes your code more readable overall.
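
Conversely, apply/3 is still the right tool when the function name (or arity) is only known at runtime, for example when it arrives from configuration or a message. A contrived sketch:

# the function to call is decided at runtime, so apply/3 is appropriate
{mod, fun, args} = {Enum, :take, [[1, 2, 3], 2]}
apply(mod, fun, args)
# => [1, 2]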

Treat your Elixir warnings as errors in your non-dev environments

One of the bittersweet things about Go is its relentlessness about warnings: you cannot compile Go code which has warnings in it. This is great for the long-term health of the project; however, it is not so pleasant while writing the code. Elixir takes the opposite approach, where even problems which should break your build are treated as warnings. One such instance is behaviours: if you have a module which implements a behaviour and does not implement a non-optional callback, all Elixir does is emit a warning! It should really throw an error and stop the compilation. However, all hope is not lost! You can use an Elixir compiler flag to treat warnings as errors. All you need to do is add the following to your mix.exs:


  def project do
    [
      app: :awesome_possum,
      # ...
      # treat warnings as errors in non-dev environments
      elixirc_options: [warnings_as_errors: Mix.env() != :dev]
      # ...
    ]
  end

You can even hard-code it to true and it will always treat warnings as errors. However, non-dev is the sweet spot for me, as I may be testing incomplete code with warnings in dev.
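
With the flag on, a module that forgets a required callback now fails the build instead of slipping through. A small sketch (reusing a behaviour like the one from the apply example above):

defmodule WorkerBehaviour do
  @callback perform(job :: any()) :: any()
end

defmodule BrokenWorker do
  @behaviour WorkerBehaviour
  # perform/1 is missing. Normally this only emits a warning along the lines of
  # "function perform/1 required by behaviour WorkerBehaviour is not implemented",
  # but with warnings_as_errors the compilation fails
end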

You can also pass these options via the CLI like so:


# using elixirc
elixirc --warnings-as-errors awesome_possum.ex
# using mix
mix compile --warnings-as-errors

Utility script to run your bash commands in the background and notify you when they are done

I have a small bash utility script that I use to run commands in the background. For instance, I do a git push in the background by running b git push.

This utility runs the command in the background and notifies me when it is complete. It also tells me if the command succeeded or failed. Here it is:

#!/bin/bash
# Script to run a command in the background, redirecting
# STDOUT and STDERR to /tmp/b.log

echo "$(date +%Y-%m-%d:%H:%M:%S): started running $*" >> /tmp/b.log
cmd="$*"
(/bin/bash -l -c "$cmd" 1>> /tmp/b.log 2>&1; notify-send --urgency=low -i "$([ $? = 0 ] && echo '/home/goodcode/.icons/ruby_green_icon.png' || echo error)" "$cmd")&>/dev/null &

Let us split this command up and understand what it does:

  1. /bin/bash invokes bash; this is useful when you want to execute arbitrary commands constructed in a script
  2. -l runs it as a login shell, so your usual profile and PATH are loaded
  3. -c passes the actual command
  4. 1>> /tmp/b.log redirects standard output to a file called /tmp/b.log in append mode
  5. 2>&1 redirects the standard error stream to standard output, which now points at /tmp/b.log, so both streams end up in the log
  6. ; separates the commands, so the next command runs regardless of whether the first succeeded
  7. notify-send shows a notification on our screen; it is executed once the first command has finished
  8. $([ $? = 0 ] && echo '/home/goodcode/.icons/ruby_green_icon.png' || echo error): $? gives us the exit status of the previous command; if it is 0 the command succeeded and a green ruby icon is used, otherwise the stock "error" icon is shown
  9. &>/dev/null redirects the stdout and stderr of this entire command group to /dev/null, the null device, which discards everything written to it; basically it is as if we are writing into the void
  10. & at the end backgrounds the command

Changes to your prod.exs before deploying Phoenix apps using distillery

While deploying my first Phoenix app, I spent 2 hours trying to debug why the app wasn’t showing up on http://localhost:4000. I usually test it using curl -v http://localhost:4000/ from my server. I was scratching my head for a long time before reading these lines in the prod.exs file:

# ## Using releases
#
# If you are doing OTP releases, you need to instruct Phoenix
# to start the server for all endpoints:
#
#     config :phoenix, :serve_endpoints, true
#config :phoenix, :serve_endpoints, true

As it says, uncomment the config :phoenix, :serve_endpoints, true line if you are using distillery (or OTP releases in general) for deployments, and save yourself some frustration :)

How to redirect back or to the root path in a Phoenix app, similar to Rails

While writing a Phoenix app you may want to redirect a user back to where they came from. This is possible because browsers send a Referer header (the misspelling is part of the HTTP spec) which tells you which page/URL the user came from. Rails allows you to do this using redirect_to :back in versions < 5 and redirect_back(fallback_location: "/") in versions >= 5. You can do something similar in Phoenix by using the following code snippet.

# works inside a controller (or anywhere Plug.Conn and Phoenix.Controller are imported)
def redirect_back(conn, fallback \\ "/") do
  case get_req_header(conn, "referer") do
    [referer] -> redirect(conn, external: referer)
    _         -> redirect(conn, to: fallback)
  end
end

It can be used as below.


conn
|> put_flash(:info, "Success!")
|> redirect_back

# or

conn
|> put_flash(:info, "Success!")
|> redirect_back(_fallback = "/dash")

Annotating variables with underscore variables to make code more readable

We should always strive to make our code as readable as possible. Underscore variables aid us in making our code more readable by annotating literal values with meaning.

Look at the following variations

without any underscore variable annotation

Tentacat.Contents.find(@owner, @repo, "", client)

While reading the code, it is difficult to know what the third parameter is. In most of these cases, I end up navigating to the actual function definition and reading the parameter name.

with underscore annotations

Here is an improved version of the same code with an underscore variable annotation. It makes it crystal clear that the third argument is a path.

Tentacat.Contents.find(@owner, @repo, _path = "", client)

What are the techniques you use in your code to make it more readable?

A good .iex.exs for your Phoenix apps

Having a good .iex.exs in your Phoenix apps makes it easy to navigate in your IEx repl.

Here is what I have in my phoenix apps:

# .iex.exs
alias GC.{
  Repo,
  User,
  Site,
  Entry
}

import Ecto.Query

alias GC.Client, as: DC
alias GC.Web.Router.Helpers, as: RH

import IExHelpers

# lib/iex_helpers.ex

defmodule IExHelpers do
  alias GC.{Repo, User}
  # grab any one user for quick experiments in iex
  def get_user, do: Repo.one(Ecto.Query.first(User))
  def routes, do: Mix.Task.run("phx.routes")
end
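
A couple more helpers along the same lines that could go into the same IExHelpers module (my own suggestions, using the same aliases):

# fetch the newest row of any schema, e.g. last(Site)
def last(schema), do: schema |> Ecto.Query.last() |> GC.Repo.one()

# count rows of any schema, e.g. count(User)
def count(schema), do: GC.Repo.aggregate(schema, :count, :id)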

What are the tricks present in your .iex.exs? Please share :)