Posted in Programming
SSH is one of the most widely used protocols for connecting to remote shells. While there are numerous SSH clients, the most widely used remains OpenSSH's ssh. OpenSSH is the default SSH client on every major Linux distribution, and is trusted by cloud computing providers such as Amazon's EC2 service and web hosting companies like MediaTemple. There is a plethora of tips and tricks that can make your experience even better than it already is. Read on to discover some of the best tweaks for your favorite SSH client.
A keep-alive is a small piece of data transmitted between a client and a server to ensure that the connection is still open or to keep the connection open. Many protocols implement this as a way of cleaning up dead connections to the server. If a client does not respond, the connection is closed.
SSH does not enable keep-alives by default, and there are pros and cons to that. A major pro is that, under many conditions, your connection will still be usable when you reconnect after dropping off the Internet. For those who drop out of WiFi a lot, it's a major plus to discover you don't need to log in again.
For those who get the following message from their SSH client when they stop typing for a few minutes, it's not as convenient:
symkat@symkat:~$
Read from remote host symkat.com: Connection reset by peer
Connection to symkat.com closed.
This happens because your router or firewall is trying to clean up dead connections. It's seeing that no data has been transmitted in N seconds and falsely assumes that the connection is no longer in use.
To rectify this you can add a Keep-Alive. This will ensure that your connection stays open to the server and the firewall doesn't close it.
To make all connections from your shell send a keep-alive, add the following to your ~/.ssh/config file (modern OpenSSH spells the option TCPKeepAlive; very old releases used KeepAlive):

TCPKeepAlive yes
ServerAliveInterval 60
The con is that if your connection drops and a keep-alive probe goes unanswered, SSH will disconnect you. If that becomes a problem, you can always actually fix the Internet connection.
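You can also tune how many unanswered keep-alives SSH tolerates before it gives up, with ServerAliveCountMax. A sketch of a fuller configuration (the interval and count values are illustrative, not recommendations):

```
Host *
    TCPKeepAlive yes
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

With these values, ssh sends a server-alive probe every 60 seconds and disconnects after three go unanswered, so a dead connection is torn down after roughly three minutes.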
Do you make a lot of connections to the same servers? You may not have noticed how slow an initial connection to a shell is. If you multiplex your connection you will definitely notice it though. Let's test the difference between a multiplexed connection using SSH keys and a non-multiplexed connection using SSH keys:
# Without multiplexing enabled:
$ time ssh firstname.lastname@example.org uptime
 20:47:42 up 16 days,  1:13,  3 users,  load average: 0.00, 0.01, 0.00

real	0m1.215s
user	0m0.031s
sys	0m0.008s

# With multiplexing enabled:
$ time ssh email@example.com uptime
 20:48:43 up 16 days,  1:14,  4 users,  load average: 0.00, 0.00, 0.00

real	0m0.174s
user	0m0.003s
sys	0m0.004s
We can see that multiplexing the connection is much faster, in this instance about seven times faster than not multiplexing. Multiplexing gives us a "control" connection: your initial connection to a server is turned into a UNIX socket file on your computer, and all subsequent connections use that socket to reach the remote host. This saves time by skipping the initial encryption, key exchange, and negotiation for subsequent connections to the server.
To enable multiplexing do the following:
In a shell:
$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections
Add this to your ~/.ssh/config file:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/connections/%r_%h_%p
A negative to this is that some uses of ssh may fail to work with a multiplexed connection, most notably commands that use tunneling, like git, svn, or rsync, or forwarding a port. For these you can add the option -oControlMaster=no. To prevent a specific host from using a multiplexed connection, add the following to your ~/.ssh/config file:

Host YOUR_SERVER_OR_IP
    ControlMaster no
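Once a master connection exists, you can manage it with ssh's -O flag, which sends a command to the running multiplex master instead of opening a new session. A sketch, assuming a multiplexed connection to the host is already up (the address is illustrative):

```shell
# Ask whether the master for this host is still running:
$ ssh -O check email@example.com
# Shut the master down, closing the shared socket:
$ ssh -O exit email@example.com
```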
There are security precautions that one should take with this approach. Let's take a look at what actually happens when we connect a second connection:
$ ssh -v -i /dev/null firstname.lastname@example.org
OpenSSH_4.7p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/symkat/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Last login:
symkat@symkat:~$ exit
As we see no actual authentication took place. This poses a significant security risk if running it from a host that is not trusted, as a user who can read and write to the socket can easily make the connection without having to supply a password. Take the same care to secure the sockets as you take in protecting a private key.
Even Starbucks now has free WiFi in its stores. It seems the world has caught on to giving free Internet at most retail locations. The downside is that more teenagers with "Got Root?" stickers are camping out at these locations running the latest version of Wireshark.
SSH's encryption can stand up to most any hostile network, but what about web traffic?
Most web browsers, and certainly all the popular ones, support using a proxy to tunnel your traffic. With the -D option, SSH can provide a SOCKS proxy on localhost that tunnels to your remote server. You get all the encryption of SSH for your web traffic, and can rest assured no one will be capturing your login credentials to all those non-SSL websites you're using.
$ ssh -D1080 -oControlMaster=no email@example.com
symkat@symkat:~$
Now there is a proxy running on 127.0.0.1:1080 that can be used in a web browser or email client. Any application that supports SOCKS 4 or 5 proxies can use 127.0.0.1:1080 to tunnel its traffic.
$ nc -vvv 127.0.0.1 1080
Connection to 127.0.0.1 1080 port [tcp/socks] succeeded!
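Anything that speaks SOCKS can ride the tunnel from the command line too. For example, curl can be pointed straight at the proxy; using --socks5-hostname makes DNS resolution happen through the tunnel as well, so lookups don't leak onto the local network (the URL is illustrative):

```shell
$ curl --socks5-hostname 127.0.0.1:1080 http://example.org/
```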
Oftentimes you may want only a single piece of information from a remote host. "Is the file system full?" "What's the uptime on the server?" "Who is logged in?"
Normally you would need to log in, type the command, see the output, and then type exit (or Control-D for those in the know). There is a better way: combine ssh with the command you want to execute and get your result:
$ ssh firstname.lastname@example.org uptime
 18:41:16 up 15 days, 23:07,  0 users,  load average: 0.00, 0.00, 0.00
This made an SSH connection to the remote host, logged in, and ran the command uptime there. If you're not using SSH keys then you'll be presented with a password prompt before the command is executed.
$ ssh email@example.com ps aux | echo $HOSTNAME
symkats-macbook-pro.local
This executed the command ps aux on the remote host and sent the output to STDOUT; a pipe on my local laptop picked it up and executed echo $HOSTNAME locally. Although in most situations auxiliary data processing like awk will work flawlessly, there are many situations where you need your pipes and file IO redirects to work on the remote system instead of the local system. In that case you would want to wrap the command in single quotes:
$ ssh firstname.lastname@example.org 'ps aux | echo $HOSTNAME'
symkat.com
As a basic rule, if you're using | you're going to want to wrap the command in single quotes.
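This is ordinary shell quoting at work: with no quotes or double quotes, your local shell expands variables and splits pipes before ssh ever runs, while single quotes deliver the text untouched for the remote shell to interpret. The difference is visible without a remote host at all; a minimal local sketch:

```shell
HOSTNAME=local-machine
# Double quotes: the local shell expands $HOSTNAME first, so this is
# what the remote side would receive with unquoted arguments.
echo "echo $HOSTNAME"     # prints: echo local-machine
# Single quotes: the literal text survives, so the remote shell would
# perform the expansion itself.
echo 'echo $HOSTNAME'     # prints: echo $HOSTNAME
```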
It is also worth noting that some programs will not work with this method of executing a command, notably anything that requires a terminal, such as screen, irssi, less, or a plethora of other interactive or curses-based applications. To force a terminal to be allocated you can use the -t option:

$ ssh email@example.com screen -r
Must be connected to a terminal.
$ ssh -t firstname.lastname@example.org screen -r
$ This worked!
Pipes are useful. The concept is simple: take the output from one program's STDOUT and feed it to another program's STDIN. OpenSSH can be used as a pipe into a remote system. Let's say that we would like to transfer a directory structure from one machine to another, where the directory structure has a lot of files and subdirectories.
We could make a tarball of the directory on our own server and scp it over. If the file system this directory is on lacks the space, though, we may be better off piping the tarballed content to the remote system.
$ ls content/
1    18   27   36   45   54   63   72   81   90
10   19   28   37   46   55   64   73   82   91
100  2    29   38   47   56   65   74   83   92
11   20   3    39   48   57   66   75   84   93
12   21   30   4    49   58   67   76   85   94
13   22   31   40   5    59   68   77   86   95
14   23   32   41   50   6    69   78   87   96
15   24   33   42   51   60   7    79   88   97
16   25   34   43   52   61   70   8    89   98
17   26   35   44   53   62   71   80   9    99
$ tar -cz content | ssh email@example.com 'tar -xz'
$ ssh email@example.com
symkat@lazygeek:~$ ls content/
1    14   2    25   30   36   41   47   52   58   63   69   74   8    85   90   96
10   15   20   26   31   37   42   48   53   59   64   7    75   80   86   91   97
100  16   21   27   32   38   43   49   54   6    65   70   76   81   87   92   98
11   17   22   28   33   39   44   5    55   60   66   71   77   82   88   93   99
12   18   23   29   34   4    45   50   56   61   67   72   78   83   89   94
13   19   24   3    35   40   46   51   57   62   68   73   79   84   9    95
What we did in this example was create a new archive (-c) and compress it with gzip (-z). Because we did not use -f to tell tar to output to a file, the compressed archive was sent to STDOUT. We then piped STDOUT to ssh, using a one-off command to invoke tar with the extract (-x) and gzip (-z) arguments. This read the compressed archive from the originating server and unpacked it on our server. We then logged in to see the listing of files.
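The same pipeline can be rehearsed on a single machine by replacing the ssh hop with a plain pipe; the tar flags behave identically. A minimal sketch (the paths are illustrative):

```shell
# Build a tiny source tree to copy.
mkdir -p /tmp/tarpipe-demo/src /tmp/tarpipe-demo/dest
echo "hello" > /tmp/tarpipe-demo/src/file.txt
# Archive to STDOUT on one side of the pipe, extract on the other --
# exactly what the ssh version does across the network.
tar -cz -C /tmp/tarpipe-demo src | tar -xz -C /tmp/tarpipe-demo/dest
cat /tmp/tarpipe-demo/dest/src/file.txt     # prints: hello
```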
Additionally, we can pipe in the other direction as well. Take for example a situation where you wish to make a copy of a remote database into a local database:
symkat@chard:~$ echo "create database backup" | mysql -uroot -ppassword
symkat@chard:~$ ssh firstname.lastname@example.org 'mysqldump -udbuser -ppassword symkat' | \
> mysql -uroot -ppassword backup
symkat@chard:~$ echo "use backup; select count(*) from wp_links;" | mysql -uroot -ppassword
count(*)
12
symkat@chard:~$
What we did here was create the database backup on our local machine. Once the database was created, we used a one-off command to get a dump of the remote database. The SQL dump came through STDOUT and was piped to another command: mysql, which read STDIN (where the data now was after the pipe) to populate the database on our local machine. We then ran a MySQL command to ensure that there is data in the backup database. As we can see, SSH can provide a true pipe in either direction.
Many people run SSH on an alternate port for one reason or another. For instance, if outgoing connections to port 22 are blocked at your college or place of employment, you may have ssh listen on port 443.
Instead of saying ssh -p443 email@example.com you can add a configuration option to your ~/.ssh/config file that is specific to yourserver.com:
Host yourserver.com
    Port 443
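Recent OpenSSH (6.8 and later) can show you exactly which options will apply to a host: ssh -G prints the fully resolved configuration without connecting. A sketch using a throwaway config file (the path and host name are illustrative):

```shell
# Write a test config with a per-host Port override.
cat > /tmp/demo_ssh_config <<'EOF'
Host yourserver.com
    Port 443
EOF
# -F points ssh at our test file; -G resolves and prints the options.
ssh -G -F /tmp/demo_ssh_config yourserver.com | grep '^port '     # prints: port 443
```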
You can extrapolate from this that you can make ssh configurations specific to a host. There is little reason to use all those -oOptions when you have a well-written ~/.ssh/config file.
What is your favorite SSH Tip or Trick?