When deploying applications in the cloud, you don’t need or want your hosts exposed to the world over SSH. Malicious actors scan the entire internet for open SSH ports, and when they find one, they will often launch brute-force attacks that can overload the machine. It’s better to have a single, secured host that runs no services itself but acts as a proxy or gateway to the rest of your infrastructure.

This is called a bastion host.

Ansible is quite easy to integrate with a bastion host configuration. We will need a custom ansible.cfg and ssh_config file. Let’s start with ssh_config:

SSH Configuration
Host bastion
  Hostname ip.xxx.xxx.xxx.xxx.or.host.name
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  PasswordAuthentication no
  ForwardAgent yes
  ServerAliveInterval 60
  TCPKeepAlive yes
  ControlMaster auto
  ControlPath ~/.ssh/ansible-%r@%h:%p
  ControlPersist 15m
  ProxyCommand none
  LogLevel QUIET

Host *
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  TCPKeepAlive yes
  ProxyCommand ssh -q -A ubuntu@bastion nc %h %p
  LogLevel QUIET
  StrictHostKeyChecking no

I will now describe what the most important options mean.

For the bastion host:

  • User - I’m using an Ubuntu image kickstarted on a cloud provider as the bastion host with its default user. Never use root here. It’s not necessary.
  • ForwardAgent yes - We want to forward our SSH keys through the bastion to destination hosts.
  • ServerAliveInterval 60 - This acts as a keepalive for the connection. SSH will send small ping/pong packets every 60 seconds so your connection won’t hang or terminate after a long time.
  • ControlMaster auto - We will open one connection to the bastion host and multiplex other SSH connections through it. The connection will remain open for the duration specified in ControlPersist.
  • ControlPath - This has to be configured the same way as in ansible.cfg.
  • ProxyCommand none - We are setting ProxyCommand for all hosts, but we need to disable it for the bastion itself.
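Because the bastion connection is multiplexed, you can inspect or tear down the shared master connection with OpenSSH’s standard control commands. A quick sketch (run from the directory containing ssh_config):

```shell
# Check whether a control master for the bastion is currently running
# (it uses the ControlPath defined in ssh_config)
ssh -F ssh_config -O check bastion

# Cleanly stop the master connection when you are done
ssh -F ssh_config -O exit bastion
```

This is handy when you change ssh_config and the old master connection keeps serving stale settings: exit it and the next ssh call opens a fresh one.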

For the default host (*) configuration:

  • ProxyCommand ssh -q -A ubuntu@bastion nc %h %p - This is what makes the magic happen. It pipes your SSH connection through the bastion to the destination host.
  • StrictHostKeyChecking no - This option shouldn’t be enabled in production, but it’s useful early on, when you are repeatedly creating and destroying machines while testing. Normally, recreating a machine changes its host key and SSH warns loudly about it; since you just recreated the machine, you already know why the key changed.
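To make the ProxyCommand mechanics concrete, here is the equivalent ad-hoc invocation without any ssh_config file. The destination host name is a placeholder:

```shell
# Equivalent of the Host * block above, spelled out on the command line.
# other.host.behind.bastion is a hypothetical internal host name.
ssh -A -o ProxyCommand="ssh -q -A ubuntu@bastion nc %h %p" \
    ubuntu@other.host.behind.bastion
```

SSH runs the ProxyCommand, which opens a second SSH session to the bastion and uses netcat there to pipe raw traffic to the destination host and port; your outer SSH session then rides over that pipe.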

I’ve found examples that don’t use netcat, but I was unable to get them working; this variant has worked reliably for me.
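That said, if your bastion runs a reasonably recent OpenSSH (5.4 or newer), the -W option forwards a connection to the given host and port without needing netcat installed on the bastion. Treat this as an alternative to try, not something I have covered above:

```
Host *
  ProxyCommand ssh -q -A -W %h:%p ubuntu@bastion
```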

To test if your connections work correctly, use this configuration like so:

ssh -F ssh_config bastion
ssh -F ssh_config other.host.behind.bastion

And now for ansible.cfg:

ansible.cfg
[defaults]
forks=20

[ssh_connection]
ssh_args = -F ./ssh_config -o ControlMaster=auto -o ControlPersist=5m -o LogLevel=QUIET
control_path = ~/.ssh/ansible-%%r@%%h:%%p
pipelining=True

The most important section here is ssh_args, where we point to the ssh_config file in the current directory with the -F option. I also had to re-enter the multiplexing configuration here because it wasn’t working with only the SSH configuration. The control_path option has to use the same path as ssh_config (the % signs are escaped with %%).
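With both files in place, a quick end-to-end check is to ping every inventory host through the bastion. A minimal sketch, assuming a simple INI inventory with hypothetical internal host names:

```shell
# inventory (hypothetical host names):
#   [webservers]
#   web1.internal
#   web2.internal

# Run from the directory containing ansible.cfg and ssh_config;
# Ansible reads ansible.cfg from the current directory.
ansible -i inventory all -m ping
```

If every host answers "pong", the ProxyCommand and multiplexing setup are working and you can move on to real playbooks.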

You should now be able to run ansible and ansible-playbook commands normally; all traffic will be forwarded through the bastion.

This is a good time to install fail2ban on the bastion and maybe reconfigure it to run ssh on a non-standard, high port 😄
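As a rough sketch of that hardening on an Ubuntu bastion (the port number is just an example):

```shell
# Install fail2ban; its default jail already watches sshd logs
sudo apt-get update && sudo apt-get install -y fail2ban

# To move SSH to a high port, edit /etc/ssh/sshd_config and set, e.g.:
#   Port 22022
# then restart the SSH daemon
sudo systemctl restart ssh
```

If you do change the port, remember to add a matching Port line to the bastion entry in your ssh_config, or the ProxyCommand will no longer reach it.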