When reading up on how to secure your public cloud environment, you may have come across references to a bastion host, jump host, jump server, or some combination of those terms. Jump Hosts, as I like to call them, are servers placed between the open internet and your internal network. A jump host's only purpose is to act as a secure gateway that grants authorized users access to your internal services. Jump hosts should be hardened, minimal, actively patched, and heavily monitored. The term Bastion Host covers many more types of hardened servers that may reside in a demilitarized zone (DMZ) or perimeter network, or sit outside your firewall. They could host a service such as web, e-mail, FTP, or DNS. The focus of this article is on the type of bastion host whose only function is to allow creating Secure Shell (SSH) tunnels to internal services through an established SSH connection. It is a server through which you jump into your environment, a.k.a. a Jump Host.
Think of a Jump Host as a secure doorway into what is going on behind the scenes, the nuts and bolts of your company. It lets you centralize control over who can get into your environment. When creating this doorway, you want to make sure only authorized users are allowed through and prevent anyone from modifying the doorway itself. For a small organization this isn't a lot of work to maintain, but once you need to manage access for more than a couple of users, the management can become a burden and your security can falter.
The Jump Host is a doorway you can use to create secure tunnels through your perimeter to your internal servers without exposing them to the internet. Nothing of value should ever be stored or installed on this server. There is no need to install software tools or a graphical interface on a Jump Host; the less software and services it runs, the better. All user accounts should be unprivileged. You want to limit the need to ever log in with a privileged account, and with a properly preconfigured cloud deployment and automated user account management, you should never need to. This greatly limits the attack surface.
Some use a Windows operating system (OS) for their Jump Host and connect to it with the Remote Desktop Protocol (RDP). Unfortunately, RDP deployments typically rely on password-based authentication, which is asking for trouble in a globally connected world. Linux servers running OpenSSH support passphrase-protected public key authentication along with Time-based One-Time Passwords (TOTP). Multi-factor authentication (MFA) is therefore a real option on a Linux Jump Host.
With a Windows Server, you get a graphical user interface (GUI) along with many services that are not required. To achieve a performant RDP session, you will need to increase the size of the virtual machine (VM) or instance. You are also limited in the number of users that can connect through a Windows Jump Host at one time.
There may soon be a viable Windows option with the upcoming release of Windows Server 2019, which supports OpenSSH Server as a feature. If it ends up being possible to run the OpenSSH service on a Nano deployment of Windows Server 2019, we would have a properly stripped-down, GUI-free Jump Host running nothing but the SSH server and able to service multiple users.
For now, save Windows for the management hosts that will sit behind the Jump Host.
These are the goals you want to strive for with a Jump Host:

- Data encrypted at rest
- A hardened, minimal operating system
- Automated patching
- Traffic encrypted in transit
- Strong authentication and centralized logging
- Automated, unprivileged user account management
- Access to internal services only through SSH tunnels
Encrypting data at rest protects your data in the event someone you did not authorize gains access to your VM instance or a copy of your disks. That someone could be at the cloud provider, an attacker who gained access through an exploit, or an unauthorized employee.
There are two ways of encrypting your VM instances: at the instance level and within the OS.

With VM instance level encryption, the disks are encrypted at rest and decrypted on VM instance start up. The OS remains untouched. Anyone with access to the managed key and a backup can access your data.

OS level encryption requires configuration within the OS and/or bootloader. If someone manages to export a backup of the VM instance, they would still need the passphrase to decrypt the data. With OS level encryption, you will need to find a way of entering the passphrase on start up.
There are two types of managed keys: cloud provider managed and customer managed.

With cloud provider managed keys, you are not able to control which services can use the keys, and you cannot export them. Generally, encryption/decryption is transparent and there is no need to export the keys. The one scenario where cloud managed keys fall short is on AWS: you will not be able to move EC2 and RDS snapshots or AMIs that have been encrypted with an Amazon managed key to another account. If your backup or disaster recovery (DR) strategy involves sending data to a different account, you will need customer managed keys. The good thing about cloud provider managed keys is that they carry no additional cost.
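As a rough illustration, re-encrypting an EBS snapshot with a customer managed key so it can be shared with a DR account might look like this (the snapshot IDs, key ARN, and account ID are placeholders):

```
# Copy the snapshot, re-encrypting it with a customer managed key
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --kms-key-id arn:aws:kms:us-east-1:111111111111:key/your-key-id \
    --description "DR copy encrypted with a customer managed key"

# Share the new snapshot with the DR account
# (the key policy must also grant that account access to the key)
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0aaaaaaaaaaaaaaaa \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 222222222222
```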
Customer managed keys are stored in the same key management service as the cloud provider keys, but you have more control over them: you can generate them outside the cloud provider, control who can access them, and gain better visibility into what is using them. There is an additional cost to use these keys.
A Jump Host should not contain any sensitive client data, and if you have set up a template or script-based deployment, there is no need to perform and retain backups. As long as this is the case, VM instance level encryption using cloud provider keys is adequate.
On AWS, instances deployed from marketplace AMIs are not encrypted. You will need to perform a few extra steps to create an encrypted AMI to deploy instances from. This is image level encryption and doesn't require any changes to the OS.
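For example, creating an encrypted copy of a marketplace AMI might look like this (the AMI ID is a placeholder, and the default EBS key alias is used here for simplicity):

```
aws ec2 copy-image \
    --source-image-id ami-0123456789abcdef0 \
    --source-region us-east-1 \
    --region us-east-1 \
    --name "jump-host-encrypted" \
    --encrypted \
    --kms-key-id alias/aws/ebs
```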
The Instance Store (temporary) disk is not encrypted at rest, except on newer instance types that support Non-Volatile Memory Express (NVMe). Caching sensitive data on this drive is not recommended.
There is no turn-key OS level full disk encryption on AWS. You would need to configure that yourself.
On Azure, managed disks are transparently encrypted by default with Storage Service Encryption (SSE). All backups and images are also encrypted. There is no option to use a customer managed key.
Azure Disk Encryption (ADE) is an OS level full disk encryption you can add for additional protection. It is applied by installing an extension, and the encryption key is stored in a Key Vault. Only Ubuntu and newer versions of CentOS support ADE OS disk encryption, and you are not able to use ADE on burstable B series VMs unless you are using premium disks.
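Enabling ADE on an existing Linux VM looks roughly like this with the Azure CLI (the resource group, VM, and Key Vault names are placeholders; older CLI versions also required an AAD application):

```
az vm encryption enable \
    --resource-group my-rg \
    --name my-jump-host \
    --disk-encryption-keyvault my-keyvault \
    --volume-type ALL
```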
I would assume the temporary disk is not encrypted at rest, but I haven’t found a conclusive answer to this question. Caching of sensitive data on this drive is not recommended.
On GCP, you can encrypt VM instances on deployment from any available image using Google managed or customer managed keys.
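As a sketch, deploying an instance whose boot disk is protected by a customer managed KMS key could look like this (the project, key ring, and key names are placeholders, and the flag may require a recent gcloud release):

```
gcloud compute instances create jump-host-1 \
    --zone us-central1-a \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --boot-disk-kms-key projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
```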
The Local SSD (temporary) disks are transparently encrypted with a GCP managed key; using customer managed keys is not supported.
There is no turn-key OS level full disk encryption on GCP either. You would need to configure that yourself.
![Oracle Cloud](/images/posts/oracle cloud.jpeg)
Ha, just kidding. You have my apologies if you were trying to support Larry’s yacht racing team.
You can acquire a hardening script for your OS, or you can deploy a CIS Benchmark image that comes preconfigured for you. CIS Benchmark images are available on all three clouds (AWS, Azure, and GCP): https://www.cisecurity.org/hardened-images/
Other than cloud specific log exporting and management services, no additional software is needed.
You could configure your VM instances to automatically check for and apply patches on their own, but that method gives you no visibility into the process to ensure nothing is failing. Instead, you can use a cloud provider's automated patching service, which gives you visibility and control over the process.
On AWS, automated patching is performed by Systems Manager (SSM). You will need to attach an IAM role that grants access to Systems Manager, install the SSM agent, and then configure a maintenance window for scheduled patching.
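Wiring this up with the AWS CLI might look something like the following sketch (the window ID, target ID, instance ID, and role ARN are placeholders you would substitute with your own values):

```
# Create a weekly maintenance window (Sundays at 04:00 UTC)
aws ssm create-maintenance-window \
    --name "jump-host-patching" \
    --schedule "cron(0 4 ? * SUN *)" \
    --duration 2 --cutoff 1 \
    --allow-unassociated-targets

# Register the Jump Host as a target of the window
aws ssm register-target-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --resource-type INSTANCE \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0"

# Run the AWS-RunPatchBaseline document during the window
aws ssm register-task-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --targets "Key=WindowTargetIds,Values=<window-target-id>" \
    --task-type RUN_COMMAND \
    --task-arn AWS-RunPatchBaseline \
    --service-role-arn arn:aws:iam::111111111111:role/ssm-maintenance-role \
    --max-concurrency 1 --max-errors 1 \
    --task-invocation-parameters '{"RunCommand":{"Parameters":{"Operation":["Install"]}}}'
```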
Azure's patching service is called Update Management. You will need to set up an Automation Account and register the VMs with the Update Management service. You then create a Scheduled Deployment to define when patching will take place. This service makes use of the Microsoft Monitoring Agent extension.
GCP does not have a native operating system patching service. You would need to use third-party management software or self-manage automated patching.
An SSH connection is secured by default using the SSH-2 protocol, and an SSH tunnel is protected while it passes through that connection. Once traffic leaves the Jump Host and proceeds on to the destination VM instance, however, it is left to its own devices. To ensure all traffic is encrypted in transit, you need to use an encrypted protocol within the SSH tunnel. This means Telnet, FTP, and unencrypted database connections should not be used.
The OpenSSH server on Linux offers a lot of flexibility in how users authenticate to its daemon (service). In its configuration file, /etc/ssh/sshd_config, you can define which authentication methods are allowed for all users or for individual users. You can also restrict who can connect.
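A minimal sshd_config for a Jump Host might include directives like these (the jumpusers group is a hypothetical example):

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowGroups jumpusers          # only members of this group may connect
AllowTcpForwarding yes         # required for SSH tunnels
X11Forwarding no               # no GUI on a Jump Host
```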
If you want to take things a step further, Linux offers the Pluggable Authentication Modules (PAM) framework, which allows a lot of flexibility in configuring how users are authenticated. The OpenSSH server makes use of PAM, so if you want to enable Time-based One-Time Passwords (TOTP) as an extra factor of authentication, you would do so by modifying the PAM configuration files.
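As a sketch, enabling TOTP with the widely used google-authenticator PAM module involves roughly these steps (package names vary by distribution):

```
# Install the module (Debian/Ubuntu shown; RHEL/CentOS have it via EPEL)
sudo apt-get install libpam-google-authenticator

# Add to /etc/pam.d/sshd:
#   auth required pam_google_authenticator.so

# Then in /etc/ssh/sshd_config, require both a key and a TOTP code:
#   ChallengeResponseAuthentication yes
#   AuthenticationMethods publickey,keyboard-interactive

# Each user runs google-authenticator once to generate their secret,
# then restart sshd to apply the change:
sudo systemctl restart sshd
```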
By default, connection attempts are logged to /var/log/; which file they land in depends on the Linux distribution you choose. Some distributions go a step further and log all commands entered through the SSH connection.
To ensure there is a paper trail that can’t be erased by a malicious attacker, you will want to send the logs outside of the Jump Host.
The logging service in AWS is called CloudWatch Logs. To export logs to this service, you install the awslogs agent on the instance and then configure it to export the log files.
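A stanza in the agent's configuration file, /etc/awslogs/awslogs.conf, might look like this (the log group name is a placeholder, and the file is /var/log/auth.log on Debian-based distributions):

```
[/var/log/secure]
file = /var/log/secure
log_group_name = /jump-host/var/log/secure
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```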
The logging service in Azure is called Log Analytics; it was formerly part of the Operations Management Suite (OMS). You would deploy the Microsoft Monitoring Agent extension (the OMS agent for Linux) and configure it to send your logs to Log Analytics.
The logging service in GCP is called Stackdriver. To export logs to this service, you install the Stackdriver Logging agent (Fluentd). The agent comes with several application-specific configurations that cover most commonly installed software, but on a Jump Host there will only be the default OS logs and possibly an audit log. If you are using an OS that has SELinux installed (Red Hat-based distributions), you will want to modify the default configuration to include the SELinux audit log (/var/log/audit/audit.log).
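A sketch of an extra source file, such as /etc/google-fluentd/config.d/audit.conf, for picking up the SELinux audit log (the tag and pos_file path are illustrative):

```
<source>
  @type tail
  format none
  path /var/log/audit/audit.log
  pos_file /var/lib/google-fluentd/pos/audit.pos
  read_from_head true
  tag auditd
</source>
```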
The sole purpose of the Jump Host is to control passage into your environment. There is no need for additional administrators or to run anything other than the SSH service. With no administrative access to install anything, there is a much lower risk of a man-in-the-middle attack and a reduced attack surface.
If you can automate the management of users, there is no need to ever log in as an administrator. In the event you do need to modify the Jump Host, you can use the administrator account created when deploying the VM instance. You will want to keep the private key for this account locked away in a virtual key/password vault and never use it for any other VM instance.
AWS has no built-in feature for managing OS user accounts: you can't create additional accounts through the console, and you can't change the key pair after an instance has been deployed.
However, with the ability for instances to assume IAM roles and the ability to store public SSH keys under each IAM user, it is possible to automate user management at the OS level.
The build-it-yourself method of user automation is a script that runs on the VM instance and manages the local user accounts based on IAM users. It uses an IAM role to get the list of users and their public keys from IAM, then modifies the local accounts to match. Because this is a script, you will need to schedule it to run on a recurring basis or trigger it with the Systems Manager Run Command.
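Here is a stripped-down sketch of such a script, assuming an instance profile that permits iam:GetGroup, iam:ListSSHPublicKeys, and iam:GetSSHPublicKey, and a hypothetical IAM group named jump-host-users. A production version would also remove local accounts that have left the group.

```
#!/usr/bin/env bash
# Sync local accounts and authorized_keys from an IAM group. Run as root.
GROUP="jump-host-users"

for user in $(aws iam get-group --group-name "$GROUP" \
        --query 'Users[].UserName' --output text); do
    # Create the account if it does not exist yet
    id "$user" &>/dev/null || useradd -m -s /bin/bash "$user"

    # Fetch the user's first public key from IAM
    key_id=$(aws iam list-ssh-public-keys --user-name "$user" \
        --query 'SSHPublicKeys[0].SSHPublicKeyId' --output text)
    mkdir -p "/home/$user/.ssh"
    aws iam get-ssh-public-key --user-name "$user" \
        --ssh-public-key-id "$key_id" --encoding SSH \
        --query 'SSHPublicKey.SSHPublicKeyBody' --output text \
        > "/home/$user/.ssh/authorized_keys"

    # Lock down ownership and permissions
    chown -R "$user:$user" "/home/$user/.ssh"
    chmod 700 "/home/$user/.ssh"
    chmod 600 "/home/$user/.ssh/authorized_keys"
done
```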
On Azure, you can use the Reset Password feature to create an additional account with an SSH key or add a new SSH key to an existing account, but you can't remove accounts through the portal. The Reset Password feature makes use of an extension called VMAccessForLinux, and once the extension is installed, you can use the Azure CLI to remove user accounts. Every account created by this extension is granted the right to elevate to root using sudo, so this does not qualify for Jump Host guest user account management.
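For reference, the extension is driven through commands such as these (resource and user names are placeholders):

```
# Add or update an account with a public SSH key
az vm user update \
    --resource-group my-rg --name my-jump-host \
    --username alice --ssh-key-value "$(cat alice.pub)"

# Remove the account (not available through the portal)
az vm user delete \
    --resource-group my-rg --name my-jump-host \
    --username alice
```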
Alternatively, you can manage your local accounts using Azure Active Directory (AAD) with the AADLoginForLinux extension. After the extension is deployed, you assign the Virtual Machine User Login role to the AAD user with the scope of the VM. Logging in requires you to authenticate your AAD account through a web browser by entering a one-time code displayed during the login attempt. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/login-using-aad
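Deploying the extension and granting a user login rights looks roughly like this (the resource group, VM, and user names are placeholders):

```
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH \
    --name AADLoginForLinux \
    --resource-group my-rg --vm-name my-jump-host

az role assignment create \
    --role "Virtual Machine User Login" \
    --assignee alice@example.com \
    --scope $(az vm show -g my-rg -n my-jump-host --query id -o tsv)
```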
The old way of managing automated user account creation on Google Cloud (GCP) was to use the SSH keys feature. This is now seen as a security risk: every account can elevate itself to root, anyone with edit access to the VM instance can create an account instantly, and accounts do not get disabled when the SSH keys are removed. This does not qualify for Jump Host guest user account management, and GCP recommends against its use.
The new way to manage user accounts is through OS Login. You add regular user accounts to VM instances by adding a role to their IAM account; there is a separate role for administrator accounts. This means all local user accounts need to tie back to a Google account. The other drawback to this method is that user accounts are created on all VM instances in the project, so you will need to contain the Jump Host in its own project. SSH key management for OS Login is done through the CLI using the gcloud command. https://cloud.google.com/compute/docs/instances/managing-instance-access
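A sketch of the moving parts (the instance, project, and user names are placeholders; administrators get roles/compute.osAdminLogin instead):

```
# Enable OS Login on the instance (or project-wide via project metadata)
gcloud compute instances add-metadata jump-host-1 \
    --metadata enable-oslogin=TRUE

# Grant a user login rights at the project level
gcloud projects add-iam-policy-binding my-project \
    --member user:alice@example.com \
    --role roles/compute.osLogin

# The user registers their public key with their Google account
gcloud compute os-login ssh-keys add --key-file ~/.ssh/id_rsa.pub
```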
An SSH tunnel is a port forwarding tunnel that links a local port on your computer to a port on a remote VM instance. The VM instance sits behind the Jump Host you are connected to and is inaccessible from the outside world. When you access the local port, it is as though you were accessing the port on the remote VM instance directly. The tunnel is encased within the secure SSH connection to the Jump Host; once traffic leaves the Jump Host, it is no longer protected by that SSH connection. Therefore, you should use encrypted protocols even within an SSH tunnel.
![Jump Host Secure Tunnel Access Diagram](/images/posts/Jump Host Secure Tunnels Diagram-wt.png) A diagram of an AWS environment using a Jump Host.
For example, forwarding a local port to an internal database through the Jump Host might look like this (the hostnames, IPs, and ports are illustrative):
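```
# Forward local port 5432 to an internal PostgreSQL server via the Jump Host
ssh -i ~/.ssh/jump_key -N -L 5432:10.0.2.15:5432 alice@jump.example.com

# In another terminal, connect through the tunnel as if the database were local
psql -h 127.0.0.1 -p 5432 -U dbuser mydb
```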
This is a very high-level description of what happens and how you go about setting up your connection to the inside. There are a few different ways to set up and use SSH tunnels: each SSH client is different, there are multiple ways to establish a tunnel, and some management tools have built-in support for creating their own transparent tunnel when connecting. I will go into detail on how to use the Jump Host and SSH tunnels in another article.
In the next few articles, I plan to go into detail on how to automate the user management for each cloud provider. I will also run through how to simplify connecting through the Jump Host using the most common SSH clients and other software tools with built-in SSH tunnel support.
James started out as a web developer with an interest in hardware and open-source software development. He made the switch to IT infrastructure and spent many years with server virtualization, networking, storage, and domain management.
After exhausting the challenges and learning opportunities of traditional IT infrastructure, and with a desire to fully utilize his developer background, he made the switch to cloud computing.
For the last 3 years he has been dedicated to automating and providing secure cloud solutions on AWS and Azure for our clients.