SSH is a great tool which is used by Linux/Unix sysadmins all over the world.
One neat little thing it allows you to do is connect to ports on a remote computer through SSH. This process is also called tunneling, or creating an SSH tunnel.
The TCP traffic that flows over this connection is encrypted by SSH, so you get security without opening up ports on your remote computers to the public.
Let us take a simple example: accessing a postgresql database that lives on a remote server (at one of the server providers like AWS or DigitalOcean).
You don’t want to open port 5432 (the port on which postgresql runs) to the internet, as this would allow anyone to try bruteforce attacks on your database.
So, you deny traffic on 5432 and instead access the port from your local computer through an SSH tunnel.
~/.ssh/config syntax for a tunnel is simple:
Host myserver
    Hostname 192.168.1.1
    Port 22
    User goodcode
    # forward our local port 4000 to localhost:5432 on the remote server, which is the postgresql server
    LocalForward 4000 127.0.0.1:5432
    # forward our local port 5000 to localhost:6379 on the remote server, which is the redis server
    LocalForward 5000 127.0.0.1:6379
Let us break it down:
Host myserver: creates an ssh configuration with the name myserver, which you can connect to using ssh myserver.
Hostname 192.168.1.1: tells ssh to connect to this address when you run ssh myserver.
Port 22: you can drop this if your port is the default port 22. However, if you are running ssh on a different port on the remote server, change this accordingly.
User goodcode: this tells ssh to use the username goodcode when connecting.
LocalForward 4000 127.0.0.1:5432: this is what creates the actual tunnel. Here we are forwarding our local port 4000 to port 5432 on the remote server.
LocalForward 5000 127.0.0.1:6379: just to show that you can create multiple tunnels over the same SSH connection, I have also forwarded local port 5000 to the redis instance on the remote server.
Once we have this set up, we open an ssh connection using ssh myserver. We can then connect to postgresql and redis using the following commands:
# connect to postgresql on the remote server
psql --host localhost --port 4000 database_name
# connect to the redis instance on the remote server
redis-cli -h localhost -p 5000
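With the tunnels up, you can also sanity-check them without opening a full client session. This is a quick sketch, assuming the postgresql and redis client tools are installed on your local machine:

```shell
# confirm the postgresql tunnel responds on local port 4000
pg_isready --host localhost --port 4000
# confirm the redis tunnel responds on local port 5000 (prints PONG on success)
redis-cli -h localhost -p 5000 ping
```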
You can also pass all these options via the command line without creating an ssh config:
ssh -L 4000:localhost:5432 -L 5000:localhost:6379 -p 22 goodcode@192.168.1.1
I had to move one of my large web applications to a different server yesterday. That too across providers (from Digital Ocean to an AWS EC2 instance).
Here are the steps I took, hopefully it helps others in the future:
Install all the libraries needed for the app. Basically, follow the same steps you would for a fresh install.
Get the app running with some fake data. This step may require you to copy over the ssl certs from your previous server.
Create an entry in your /etc/hosts (on your local computer) to point to your new web server.
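For example, if your domain were example.com and the new server’s IP were 203.0.113.10 (both placeholders here), the entry would look like this:

```
# /etc/hosts on your local computer; 203.0.113.10 stands in for your new server's IP
203.0.113.10    example.com www.example.com
```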
Open your app and test it out. At this point I found that I had forgotten to move over the
.env file which had the secrets and keys needed
for the web application. So, I moved them and got the application working.
Add your new server’s public key to your old server’s
~/.ssh/authorized_keys. This is to allow us to move data directly to the new server from the old server
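If the new server doesn’t have a keypair yet, something along these lines works; user@old.example.com is a placeholder for your old server’s login:

```shell
# run on the NEW server; user@old.example.com is a placeholder for your old server
# create a keypair if one doesn't exist yet
test -f ~/.ssh/id_ed25519 || ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# append the new server's public key to the old server's ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@old.example.com
```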
Import your database over ssh from the remote server. My app uses a postgresql server. So, I had to run the following:
ssh firstname.lastname@example.org "sudo -u postgres pg_dump -Fc --no-acl --no-owner simpleform_production | gzip" | gzip -d | sudo -u simpleform pg_restore --verbose --clean --no-acl --no-owner -d simpleform_production
Test your app with the new filled out database. At this point I realized I had to move over files that were uploaded into the old app. So, I scped them over.
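For the uploaded files, a straight scp (or rsync) from the old server does the trick; the paths here are placeholders for wherever your app stores uploads:

```shell
# pull uploads from the old server onto the new one; adjust the paths for your app
scp -r firstname.lastname@example.org:/var/www/simpleform/shared/uploads /var/www/simpleform/shared/
# rsync -az can resume interrupted transfers and skips files already present:
# rsync -az firstname.lastname@example.org:/var/www/simpleform/shared/uploads/ /var/www/simpleform/shared/uploads/
```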
You should have set up the systemd scripts or any other init scripts in step 1.
Set up your old server’s nginx config to point to the new server so that it proxies all traffic there, but don’t reload nginx’s configuration yet (the script below does that as part of the cut-over). You can do this by adding a named @proxy location with the following directives:
You’ll also have to create an
/etc/hosts entry on your old server so that it points to the new server.
try_files $uri/index.html $uri.html $uri @proxy;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
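Assembled into a full pair of location blocks, the old server’s config would look roughly like this sketch; myapp.example.com is a placeholder for your domain, which resolves to the new server via the /etc/hosts entry on the old server:

```nginx
# on the OLD server, inside the server { } block for your site
location / {
    try_files $uri/index.html $uri.html $uri @proxy;
}

location @proxy {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    # myapp.example.com is a placeholder; the /etc/hosts entry makes it resolve to the new server
    proxy_pass https://myapp.example.com;
}
```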
Now, write a script which can be executed from the new server.
echo stopping simpleform
# stop the app on the new server so we can drop its database
sudo systemctl stop simpleform.target

# drop the database on the new server
echo dropping db
sudo -u simpleform dropdb simpleform_production

# create a fresh database for the new server
echo creating db
sudo -u simpleform createdb simpleform_production

# stop the application on the old server
echo stopping simpleform on the old server
ssh email@example.com "sudo stop simpleform"

# import the database from the old server to the new server
echo importing db
(ssh firstname.lastname@example.org "sudo -u postgres pg_dump -Fc --no-acl --no-owner simpleform_production | gzip" | gzip -d | sudo -u simpleform pg_restore --verbose --clean --no-acl --no-owner -d simpleform_production) || /bin/true

# start the application on the new server
echo start local simpleform
sudo systemctl start simpleform.target

# reload the nginx configuration on the old server so it starts proxying to the new one
echo reloading remote nginx
ssh email@example.com sudo nginx -s reload
Change your DNS entries so that they point to the new server’s IP
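You can watch the DNS change propagate with dig; example.com is a placeholder for your domain:

```shell
# shows the A record your resolver currently returns for the domain
dig +short example.com
```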
That is it! Now, your application is up on the new server. The dance in step #10 is required so that nobody sends data to the old server that never makes it to the new one. You will have some downtime, but it will most probably be less than 5 minutes.