Choose one of the buttons above to see an example of how to authenticate from one cloud to another.
Accessing AWS resources from Azure
Setting up this trust is simple to configure but not well known. Any resource you can attach a managed identity to will work as the source workload. The following shows how to attach a system-assigned managed identity to an existing virtual machine. The same can be done for a function app, web app, container, or anything else that supports managed identities and can be used to access AWS resources.
You could use a user-assigned managed identity instead; just make sure you store your managed identity resources in a separate, protected resource group that most AD users do not have Contributor or higher access to. Anyone with that access can generate tokens for the managed identity and impersonate it. This is why a system-assigned managed identity is preferred.
On the Azure side
You will need to configure the workload with a managed identity and retrieve its details.
We are going to use PowerShell on the Azure side as it is a bit easier to work with than the Azure CLI. This means you will need PowerShell, the Az module, and the Az.ManagedServiceIdentity sub-module installed before running the following. We will use a VM as the example source workload.
$resourceName = "<name of your existing resource>"
$resourceGroup = "<resource group name>"
$location = (Get-AzResourceGroup -Name $resourceGroup).Location
# If the Virtual Machine doesn't already have a managed identity, use the following to add one.
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $resourceName
Update-AzVm -ResourceGroupName $resourceGroup -VM $vm -IdentityType SystemAssigned
# Get the application ID and object ID for the managed identity.
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $resourceName
Get-AzADServicePrincipal -ObjectId $vm.Identity.PrincipalId
Write down the ApplicationId (client ID) and Id (object ID) returned as we will need them when granting access within AWS.
On the AWS side
Create an IAM Identity Provider
There isn’t a nice way to accurately get the thumbprint for a provider URL, so we will do this step through the web console.
- Login to the AWS console.
- Go to IAM section.
- Click on Identity Providers.
- Click Add provider.
- Choose OpenID Connect as the Provider type.
- Paste in https://sts.windows.net/<tenant ID>/ as the Provider URL. Make sure you put in your Azure tenant ID. This allows you to lock down usage of this identity provider to only originate from the tenant ID.
- Click on Get thumbprint.
- For the Audience, paste in the client ID (application ID) of the managed identity attached to your source Azure resource (workload). If you are using a system managed identity, you will have to get the client ID (application ID) by searching for the managed identity under Azure AD / Enterprise Applications. If you are using a user-assigned managed identity you can find the details under the managed identities section of Azure.
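If you want to sanity-check the thumbprint, it is just the SHA-1 fingerprint of a certificate in the provider's TLS chain. Below is a minimal Python sketch (the helper name is ours; note that AWS historically wants the fingerprint of the top intermediate CA certificate rather than the leaf, so treat the console's Get thumbprint value as authoritative):

```python
import hashlib
import ssl

def thumbprint(pem_cert: str) -> str:
    """SHA-1 fingerprint (uppercase hex, no colons) of a PEM certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha1(der).hexdigest().upper()

# To inspect the leaf certificate the provider serves, you could fetch it with:
#   pem = ssl.get_server_certificate(("sts.windows.net", 443))
#   print(thumbprint(pem))
```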
Create an IAM role
You will need to create an IAM role with a trust relationship (assume role policy document) that specifically allows the Azure managed identity. The "aud" condition is the audience we added to the identity provider. For additional security we also check that the subject (sub) matches the object ID of the managed identity, so only a resource with that Azure managed identity attached can assume this role.
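To make the condition concrete, here is a small Python illustration (ours, not AWS's actual policy evaluation engine) of what the StringEquals block checks against the claims of the incoming Azure token:

```python
# Illustration only: mimic the StringEquals check AWS performs on the
# aud/sub claims of the Azure-issued token. Placeholder values are ours.
TENANT = "<Azure tenant ID>"

policy_conditions = {
    f"sts.windows.net/{TENANT}/:aud": "<Managed identity client ID>",
    f"sts.windows.net/{TENANT}/:sub": "<Managed identity object ID>",
}

def allowed(token_claims: dict) -> bool:
    """Return True only if every condition key matches the token's claim."""
    claim_for = {"aud": token_claims.get("aud"), "sub": token_claims.get("sub")}
    return all(
        claim_for[key.rsplit(":", 1)[1]] == expected
        for key, expected in policy_conditions.items()
    )

# A token from the right managed identity passes...
assert allowed({"aud": "<Managed identity client ID>",
                "sub": "<Managed identity object ID>"})
# ...while any other identity in the same tenant is rejected.
assert not allowed({"aud": "<Managed identity client ID>",
                    "sub": "some-other-object-id"})
```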
We are going to use the AWS CLI from a Linux command prompt for this part as that is the most common platform and method for accessing AWS from the command line. You can use the Windows AWS CLI or PowerShell in a similar fashion; creating the document files will just differ slightly from the bash here-document (heredoc) syntax below.
Bash
cat << EOF > /tmp/iam-role-trustDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS account ID>:oidc-provider/sts.windows.net/<Azure tenant ID>/"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "sts.windows.net/<Azure tenant ID>/:aud": "<Managed identity client ID>",
          "sts.windows.net/<Azure tenant ID>/:sub": "<Managed identity object ID>"
        }
      }
    }
  ]
}
EOF
aws iam create-role --role-name target-workload --assume-role-policy-document file:///tmp/iam-role-trustDoc.json
cat << EOF > /tmp/iam-role-policyDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfBucketsOnly",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name target-workload --policy-name S3-bucket-list-only --policy-document file:///tmp/iam-role-policyDoc.json
On the source workload
The following examples demonstrate how to access a target resource in AWS from a source resource in Azure. For production use, you would store the managed identity IDs and the target IAM role in a Key Vault and pull in the values when needed; for these examples you can add them inline. When you are on the VM you can easily authenticate as the managed identity; it is just difficult to get the client ID of the managed identity without permission to do so. To simplify this, you could grant the managed identity Reader access to the VM resource and then read the client ID from the VM object.
PowerShell
Additional Requirements
$ClientId = "<User managed identity client ID>"
$RoleArn = "arn:aws:iam::<AWS account ID>:role/<IAM role name>"
# Use managed identity attached to the VM to authenticate with Azure.
Connect-AzAccount -Identity
# Generate a temporary access token.
$Response = Get-AzAccessToken -Resource $ClientId
# Pass that token on to AWS along with the client ID, and the role to assume.
$Creds = Use-STSWebIdentityRole -RoleArn $RoleArn -RoleSessionName $ClientId -WebIdentityToken $Response.Token
Get-S3Bucket -Credential $Creds
Bash
Additional Requirements
clientId="<Managed identity client ID>"
# Use managed identity attached to the VM to authenticate with Azure.
az login --identity
# Generate a temporary access token.
token=$(az account get-access-token --resource $clientId --query "accessToken" --output tsv)
# Pass that token on to AWS along with the client ID, and the role to assume.
session=$(aws sts assume-role-with-web-identity --role-arn arn:aws:iam::<AWS account ID>:role/<IAM role name> --role-session-name $clientId --web-identity-token $token)
# Switch over to use the AWS session.
export AWS_ACCESS_KEY_ID="`echo $session | sed -n 's/.*"AccessKeyId":\s"\([^"]*\)",.*/\1/p'`"
export AWS_SECRET_ACCESS_KEY="`echo $session | sed -n 's/.*"SecretAccessKey":\s"\([^"]*\)",.*/\1/p'`"
export AWS_SESSION_TOKEN="`echo $session | sed -n 's/.*"SessionToken":\s"\([^"]*\)",.*/\1/p'`"
aws s3 ls
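The sed extraction above works but is fragile against formatting changes in the CLI output. Since the STS response is JSON, a small Python (or `jq`) parse is more robust. A sketch, where the response shape matches `aws sts assume-role-with-web-identity` but the credential values are made-up samples:

```python
import json

# Example shape of the `aws sts assume-role-with-web-identity` response.
# The credential values below are fabricated samples for illustration.
session_json = """
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secretExample",
    "SessionToken": "tokenExample",
    "Expiration": "2024-01-01T00:00:00Z"
  }
}
"""

creds = json.loads(session_json)["Credentials"]
# Emit lines bash can eval, e.g. eval "$(python3 parse_creds.py)"
# (parse_creds.py is a hypothetical name for this script).
for env_var, field in [("AWS_ACCESS_KEY_ID", "AccessKeyId"),
                       ("AWS_SECRET_ACCESS_KEY", "SecretAccessKey"),
                       ("AWS_SESSION_TOKEN", "SessionToken")]:
    print(f'export {env_var}="{creds[field]}"')
```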
Python
Additional Requirements
from azure.identity import DefaultAzureCredential
import boto3
clientId = "<User managed identity client ID>"
roleArn = "arn:aws:iam::<AWS account ID>:role/<IAM role name>"
# Use managed identity attached to the VM to authenticate with Azure.
credential = DefaultAzureCredential(managed_identity_client_id=clientId)
# Generate a temporary access token.
response = credential.get_token(clientId)
# Pass that token on to AWS along with the client ID, and the role to assume.
sts = boto3.client('sts')
response = sts.assume_role_with_web_identity(
    RoleArn = roleArn,
    RoleSessionName = clientId,
    WebIdentityToken = response.token
)
# Switch over to use the AWS session.
session = boto3.Session(
    aws_access_key_id = response['Credentials']['AccessKeyId'],
    aws_secret_access_key = response['Credentials']['SecretAccessKey'],
    aws_session_token = response['Credentials']['SessionToken']
)
s3 = session.client('s3')
s3.list_buckets()
If the cross-cloud authentication worked, you should get a list of S3 buckets from the AWS account from your Azure VM without the need to store credentials that could leak and be used elsewhere.
Accessing AWS resources from GCP
Google has provided a good amount of documentation on how to do this process. If you have a slightly different setup, you might want to have a look at some of their examples.
Configure workload identity federation with AWS or Azure
We are going to use gcloud CLI from a Linux command prompt for this example as that is the most used platform and method for accessing GCP and AWS from the command line. You can use gcloud and AWS CLI commands from Windows in a similar fashion.
On the GCP side
You will need to configure a Compute Engine VM instance with a service account as your source workload. You can start with an existing instance, just make sure its service account has the cloud-platform scope.
Bash
projectId="<GCP project ID>"
serviceAccountName="<Your service account name>"
# Create a VM instance with the service account attached.
gcloud compute instances create example-vm \
--service-account $serviceAccountName@$projectId.iam.gserviceaccount.com \
--scopes cloud-platform
# Get the unique ID for the service account.
gcloud iam service-accounts describe $serviceAccountName@$projectId.iam.gserviceaccount.com
On the AWS side
Create an IAM Identity Provider
There isn’t a simple way to accurately get the thumbprint for the provider URL from the command-line, so we will do this step through the web console instead.
- Login to the AWS console.
- Go to IAM section.
- Click on Identity Providers.
- Click Add provider.
- Choose OpenID Connect as the Provider type.
- Paste in https://accounts.google.com as the Provider URL.
- Click on Get thumbprint.
- For the Audience, paste in the unique ID (client ID) of the GCP service account attached to your source workload GCP resource.
Create an IAM role
You will need to create an IAM role with a trust relationship (assume role policy document) that specifically allows the GCP service account.
cat << EOF > /tmp/iam-role-trustDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS account ID>:oidc-provider/accounts.google.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:aud": "<GCP service account unique ID>"
        }
      }
    }
  ]
}
EOF
aws iam create-role --role-name target-workload --assume-role-policy-document file:///tmp/iam-role-trustDoc.json
cat << EOF > /tmp/iam-role-policyDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfBucketsOnly",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name target-workload --policy-name S3-bucket-list-only --policy-document file:///tmp/iam-role-policyDoc.json
On the source workload
The following examples demonstrate how to access a target resource in AWS from a source resource in GCP.
Bash
Additional Requirements
# There isn't a way to get the credentials client ID from the command line. The best way to get it is by decoding the token generated below and pulling it out of the "aud" field. You can decode a JWT token using https://jwt.io/. The client ID should look like ###########.apps.googleusercontent.com.
clientId="<GCP credentials client ID>"
# Generate a temporary access token.
token=$(gcloud auth print-identity-token)
# Pass that token on to AWS along with the client ID, and the role to assume.
session=$(aws sts assume-role-with-web-identity --role-arn arn:aws:iam::<AWS account ID>:role/<IAM role name> --role-session-name $clientId --web-identity-token $token)
# Switch over to use the AWS session.
export AWS_ACCESS_KEY_ID="`echo $session | sed -n 's/.*"AccessKeyId":\s"\([^"]*\)",.*/\1/p'`"
export AWS_SECRET_ACCESS_KEY="`echo $session | sed -n 's/.*"SecretAccessKey":\s"\([^"]*\)",.*/\1/p'`"
export AWS_SESSION_TOKEN="`echo $session | sed -n 's/.*"SessionToken":\s"\([^"]*\)",.*/\1/p'`"
aws s3 ls
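As the comment in the block above notes, the token is a JWT, so instead of pasting it into jwt.io you can decode the payload locally; the aud claim holds the client ID. A sketch using a hand-built sample token (all claim values here are made up):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a sample unsigned token just to demonstrate the decoding;
# a real token from `gcloud auth print-identity-token` is signed.
header = b64url(json.dumps({"alg": "none"}).encode())
payload = b64url(json.dumps({"aud": "123456789.apps.googleusercontent.com"}).encode())
sample = f"{header}.{payload}."

print(jwt_claims(sample)["aud"])  # this is the value to use as clientId
```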
Python
Additional Requirements
import google.auth
import google.auth.transport.requests
from google.oauth2 import id_token
import boto3
roleArn = "arn:aws:iam::<AWS account ID>:role/<IAM role name>"
uniqueId = "<GCP service account unique ID>"
# Get credentials for service account attached to the instance.
credentials, project_id = google.auth.default()
request = google.auth.transport.requests.Request()
credentials.refresh(request)
# Generate a temporary access token.
token = id_token.fetch_id_token(request, uniqueId)  # the audience must match the identity provider's Audience
# Pass that token on to AWS along with the client ID, and the role to assume.
sts = boto3.client('sts')
response = sts.assume_role_with_web_identity(
    RoleArn = roleArn,
    RoleSessionName = uniqueId,
    WebIdentityToken = token
)
# Switch over to use the AWS session.
session = boto3.Session(
    aws_access_key_id = response['Credentials']['AccessKeyId'],
    aws_secret_access_key = response['Credentials']['SecretAccessKey'],
    aws_session_token = response['Credentials']['SessionToken']
)
s3 = session.client('s3')
s3.list_buckets()
If the cross-cloud authentication worked, you should get a list of S3 buckets from the AWS account from your GCP instance without the need to store credentials that could leak and be used elsewhere.
Accessing Azure resources from AWS
AWS IAM/STS is not an OpenID Connect (OIDC) provider. GCP has built its own integration to allow AWS IAM users/roles to access GCP resources, but when Azure is the target we need another approach: using an additional AWS service, Cognito, as the OIDC provider.
Cognito is overkill for what we need, but it is currently the only way to make this work. You will have to take extra precautions when restricting access to Cognito: if a user can generate the token, they can impersonate the workload.
On the AWS side
Creating Cognito Identity Pool
The terms developer provider and developer user are a little confusing for our use, as Cognito is designed to authenticate users rather than workloads. We will refer to them as login provider and workload identifier. You can set them to whatever values identify the target workload; they just need to match on both sides.
Bash
identityPool="AzureAccessFromAWS"
loginProvider="ec2-instances" # Unique name identifying the group of workloads
region="<AWS region>"
# Create a Cognito identity.
aws cognito-identity create-identity-pool --identity-pool-name $identityPool --no-allow-unauthenticated-identities --developer-provider-name $loginProvider --region $region
Grab the IdentityPoolId from the output as it will be needed for the next few steps.
Create IAM role
You will need to have an existing EC2 instance that you can access remotely using SSH.
We will create an IAM role, instance profile, and attach the instance profile/role to the EC2 instance.
You will need to create an IAM role with a trust relationship (assume role policy document) that specifically allows the EC2 service to assume it.
accountId=$(aws sts get-caller-identity --query "Account" --output text)
region="<AWS region>"
identityPoolId="<Will be in the format of region:GUID>"
instanceId="<Your EC2 instance ID>"
roleName="ec2-access-azure"
loginProvider="ec2-instances"
workloadIdentifier="<workload identifier>"
cat << EOF > /tmp/iam-role-trustDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Create IAM role and instance profile
aws iam create-role --role-name $roleName --assume-role-policy-document file:///tmp/iam-role-trustDoc.json
aws iam create-instance-profile --instance-profile-name $roleName
aws iam add-role-to-instance-profile --instance-profile-name $roleName --role-name $roleName
cat << EOF > /tmp/iam-role-policyDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowStsAccess",
      "Effect": "Allow",
      "Action": [
        "sts:GetSessionToken",
        "sts:GetFederationToken",
        "sts:GetAccessKeyInfo",
        "sts:GetCallerIdentity",
        "sts:GetServiceBearerToken"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowCognitoAccess",
      "Effect": "Allow",
      "Action": [
        "cognito-identity:GetOpenIdTokenForDeveloperIdentity",
        "cognito-identity:LookupDeveloperIdentity",
        "cognito-identity:MergeDeveloperIdentities",
        "cognito-identity:UnlinkDeveloperIdentity"
      ],
      "Resource": "arn:aws:cognito-identity:$region:$accountId:identitypool/$identityPoolId"
    }
  ]
}
EOF
aws iam put-role-policy --role-name $roleName --policy-name auth-access --policy-document file:///tmp/iam-role-policyDoc.json
# Attach IAM instance profile to EC2 instance.
aws ec2 associate-iam-instance-profile --iam-instance-profile Name=$roleName --instance-id $instanceId
aws cognito-identity get-open-id-token-for-developer-identity --identity-pool-id $identityPoolId --logins "$loginProvider=$workloadIdentifier" --region $region
Grab the IdentityId from the output as it will be the subject for creating the Azure federated identity credentials.
On the Azure side
You have two options for what identity to use on the Azure side. You can use an AD App registration or a user-assigned managed identity. If you use user-assigned managed identities, you will want to create them in a dedicated resource group that only your security team has access to as anyone with Contributor access to a managed identity can generate a token from it.
AD App Registrations do not have this security issue but are a lot more complex for our needs.
Create User-Assigned Managed Identity
PowerShell
$subscription = Get-AzContext | Select Subscription
$identityName = "<Name identifying the source>"
$resourceGroup = "<Resource group name>"
$location = (Get-AzResourceGroup -Name $resourceGroup).Location
$issuer = "https://cognito-identity.amazonaws.com"
$subject = "<Cognito identity ID>"
$audience = "<Cognito identity pool ID>"
$workloadIdentifier = "<Workload identifier>"
# Create user managed identity.
$identity = New-AzUserAssignedIdentity -ResourceGroupName $resourceGroup -Name $identityName -Location $location
# Create federated credentials for user managed identity.
New-AzFederatedIdentityCredentials -Name $workloadIdentifier -IdentityName $identity.Name -ResourceGroupName $resourceGroup -Issuer $issuer -Subject $subject -Audience $audience
# Assign the managed identity read access to the resource group.
New-AzRoleAssignment -ObjectId $identity.PrincipalId -RoleDefinitionName Reader -ResourceGroupName $resourceGroup
On the source workload
From the EC2 instance or ECS task run the following.
Bash
Additional Requirements
tenantId="<Azure tenant ID>"
clientId="<User managed identity client ID>"
region="<Cognito region>"
identityPool="<Cognito identity pool ID>"
loginProvider="ec2-instances"
workloadIdentifier="<Workload identifier>"
# Generate a Cognito identity and federated token.
token=$(aws cognito-identity get-open-id-token-for-developer-identity --identity-pool-id $identityPool --logins "$loginProvider=$workloadIdentifier" --region $region --query "Token" --output text)
# Pass that token on to Azure along with the tenant ID and client ID.
az login --federated-token $token --tenant $tenantId --service-principal -u $clientId
az resource list
If the cross-cloud authentication worked, you should get a list of Azure resources from your AWS instance without the need to store credentials that could leak and be used elsewhere.
Python
Additional Requirements
from azure.core.credentials import TokenCredential, AccessToken
from azure.identity import AzureAuthorityHosts
from azure.mgmt.resource import ResourceManagementClient
from msal import ConfidentialClientApplication
import boto3
import time
region = "<AWS region>"
tenantId = "<Azure tenant ID>"
subscriptionId = "<Azure subscription ID>"
clientId = "<User managed identity client ID>"
issuer = "https://cognito-identity.amazonaws.com"
identityPool = "<Cognito identity pool ID>"
loginProvider = "ec2-instances"
workloadIdentifier = "<Workload identifier>"
cognito = boto3.client('cognito-identity', region_name = region)
# Generate a Cognito identity and federated identity token.
identity = cognito.get_open_id_token_for_developer_identity(
    IdentityPoolId = identityPool,
    Logins = {
        loginProvider: workloadIdentifier
    }
)
# Create a custom class that acts as a credential.
class AssertionCredential(TokenCredential):
    def __init__(self, client_id, tenant_id, id_token):
        self.app = ConfidentialClientApplication(
            client_id,
            client_credential={
                'client_assertion': id_token
            },
            authority=f"https://{AzureAuthorityHosts.AZURE_PUBLIC_CLOUD}/{tenant_id}"
        )

    def get_token(self, *scopes: str) -> AccessToken:
        token = self.app.acquire_token_for_client(scopes)
        if 'error' in token:
            raise Exception(token['error_description'])
        expires_on = time.time() + token['expires_in']
        return AccessToken(token['access_token'], int(expires_on))
# Get a credential based on the identity token.
creds = AssertionCredential(
    tenant_id = tenantId,
    client_id = clientId,
    id_token = identity['Token']
)
# Get a list of resources from Azure
resource_client = ResourceManagementClient(creds, subscriptionId)
resources = resource_client.resources.list()
column_width = 36
print("Resource".ljust(column_width) + "Type".ljust(column_width))
print("-" * (column_width * 2))
for resource in list(resources):
    print(f"{resource.name:<{column_width}}{resource.type:<{column_width}}")
Accessing Azure resources from GCP
Google has provided a good amount of documentation on how to do this process. If you have a slightly different setup, you might want to have a look at some of their examples.
Configure workload identity federation with AWS or Azure
We are going to use gcloud CLI from a Linux command prompt for this example as that is the most used platform and method for accessing GCP from the command line. You can use gcloud CLI commands from Windows in a similar fashion.
On the GCP side
Here we will create a Compute Engine VM instance with a service account as your source workload. You can start with an existing instance. Just make sure its service account has the cloud-platform scope.
Bash
projectId="<GCP project ID>"
serviceAccountName="<Your service account name>"
# Create a VM instance with the service account attached.
gcloud compute instances create example-vm \
--service-account $serviceAccountName@$projectId.iam.gserviceaccount.com \
--scopes cloud-platform
# Get the unique ID for the service account.
gcloud iam service-accounts describe $serviceAccountName@$projectId.iam.gserviceaccount.com
On the Azure side
You have two options for what identity to use on the Azure side. You can use an AD App registration or a user-assigned managed identity. If you use user-assigned managed identities, you will want to create them in a dedicated resource group that only your security team has access to as anyone with Contributor access to a managed identity can generate a token from it.
AD App Registrations do not have this security issue but are a lot more complex for our needs.
Create an Azure Managed Identity
PowerShell
$subscription = Get-AzContext | Select Subscription
$identityName = "<Name identifying the source>"
$resourceGroup = "<Resource group name>"
$location = (Get-AzResourceGroup -Name $resourceGroup).Location
$issuer = "https://accounts.google.com"
$subject = "<Service account unique/client ID>"
$audience = "<Service account email or instance name>"
$workloadIdentifier = "<Workload identifier>"
# Create user managed identity.
$identity = New-AzUserAssignedIdentity -ResourceGroupName $resourceGroup -Name $identityName -Location $location
# Create federated credentials for user managed identity.
New-AzFederatedIdentityCredentials -Name $workloadIdentifier -IdentityName $identity.Name -ResourceGroupName $resourceGroup -Issuer $issuer -Subject $subject -Audience $audience
# Assign the managed identity read access to the resource group.
New-AzRoleAssignment -ObjectId $identity.PrincipalId -RoleDefinitionName Reader -ResourceGroupName $resourceGroup
On the source workload
From the VM instance run the following.
Bash
Additional Requirements
tenantId="<Azure tenant ID>"
clientId="<User managed identity client ID>"
token=$(gcloud auth print-identity-token)
az login --federated-token $token --tenant $tenantId --service-principal -u $clientId
az resource list
If the cross-cloud authentication worked, you should get a list of Azure resources from your GCP instance without the need to store credentials that could leak and be used elsewhere.
Python
Additional Requirements
from azure.core.credentials import TokenCredential, AccessToken
from azure.identity import AzureAuthorityHosts
from azure.mgmt.resource import ResourceManagementClient
from msal import ConfidentialClientApplication
import google.auth
import google.auth.transport.requests
from google.oauth2 import id_token
import time
region = "<AWS region>"
tenantId = "<Azure tenant ID>"
subscriptionId = "<Azure subscription ID>"
clientId = "<User managed identity client ID>"
audience = "<Service account email or instance name>"
# Get a federated identity token.
credentials, project_id = google.auth.default()
request = google.auth.transport.requests.Request()
credentials.refresh(request)
id_token = id_token.fetch_id_token(request, audience)
# Create a custom class that acts as a credential.
class AssertionCredential(TokenCredential):
    def __init__(self, client_id, tenant_id, id_token):
        self.app = ConfidentialClientApplication(
            client_id,
            client_credential={
                'client_assertion': id_token
            },
            authority=f"https://{AzureAuthorityHosts.AZURE_PUBLIC_CLOUD}/{tenant_id}"
        )

    def get_token(self, *scopes: str) -> AccessToken:
        token = self.app.acquire_token_for_client(scopes)
        if 'error' in token:
            raise Exception(token['error_description'])
        expires_on = time.time() + token['expires_in']
        return AccessToken(token['access_token'], int(expires_on))
# Get a credential based on the identity token.
creds = AssertionCredential(
    tenant_id = tenantId,
    client_id = clientId,
    id_token = id_token
)
# Get a list of resources from Azure
resource_client = ResourceManagementClient(creds, subscriptionId)
resources = resource_client.resources.list()
column_width = 36
print("Resource".ljust(column_width) + "Type".ljust(column_width))
print("-" * (column_width * 2))
for resource in list(resources):
    print(f"{resource.name:<{column_width}}{resource.type:<{column_width}}")
Accessing GCP resources from AWS
Google has provided a good amount of documentation on how to do this process. If you have a slightly different setup, you might want to have a look at some of their examples.
Configure workload identity federation with AWS or Azure
We are going to use gcloud CLI from a Linux command prompt for this example as that is the most used platform and method for accessing GCP and AWS from the command line. You can use gcloud and AWS CLI commands from Windows in a similar fashion.
On the AWS side
Create IAM role
You will need to have an existing EC2 instance that you can access remotely using SSH.
We will create an IAM role, an IAM instance profile, and attach the instance profile/role to the EC2 instance.
You will need to create an IAM role with a trust relationship (assume role policy document) that specifically allows EC2 service to assume it.
Bash
instanceId="<Your EC2 instance ID>"
roleName="source-workload"
cat << EOF > /tmp/iam-role-trustDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name $roleName --assume-role-policy-document file:///tmp/iam-role-trustDoc.json
aws iam create-instance-profile --instance-profile-name $roleName
aws iam add-role-to-instance-profile --instance-profile-name $roleName --role-name $roleName
cat << EOF > /tmp/iam-role-policyDoc.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGeneratingToken",
      "Effect": "Allow",
      "Action": [
        "cognito-identity:GetOpenIdTokenForDeveloperIdentity"
      ],
      "Resource": "<ARN of your Cognito identity pool>"
    },
    {
      "Sid": "AllowStsAccess",
      "Effect": "Allow",
      "Action": [
        "sts:GetSessionToken",
        "sts:GetFederationToken",
        "sts:GetAccessKeyInfo",
        "sts:GetCallerIdentity",
        "sts:GetServiceBearerToken"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name $roleName --policy-name auth-access --policy-document file:///tmp/iam-role-policyDoc.json
aws ec2 associate-iam-instance-profile --iam-instance-profile Name=$roleName --instance-id $instanceId
On the GCP side
Creating GCP resources
Bash
projectId="<GCP project ID>"
identityPool="aws-cloud-access"
providerId="aws-provider"
accountId="<AWS account ID>"
role="source-workload"
serviceAccountEmail="<GCP service account email>"
gcloud iam workload-identity-pools create $identityPool \
--location="global" \
--description="Allows resources in other clouds to access resources in GCP" \
--display-name=$identityPool
gcloud iam workload-identity-pools providers create-aws $providerId \
--location="global" \
--workload-identity-pool=$identityPool \
--account-id="$accountId" \
--attribute-mapping="google.subject=assertion.arn,attribute.account=assertion.account,attribute.aws_role=assertion.arn.extract('assumed-role/{role}/'),attribute.aws_ec2_instance=assertion.arn.extract('assumed-role/{role_and_session}').extract('/{session}')" \
--attribute-condition="assertion.arn.startsWith('arn:aws:sts::$accountId:assumed-role/')"
projectNum=$(gcloud projects describe $projectId --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding $serviceAccountEmail \
--role=roles/iam.workloadIdentityUser \
--member="principal://iam.googleapis.com/projects/$projectNum/locations/global/workloadIdentityPools/$identityPool/attribute.aws_role/$role"
gcloud iam workload-identity-pools create-cred-config \
projects/$projectId/locations/global/workloadIdentityPools/$identityPool/providers/$providerId \
--service-account=$serviceAccountEmail \
--service-account-token-lifetime-seconds=3600 \
--aws \
--output-file=gcp_auth.json
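The --attribute-mapping above uses CEL's extract() to pull pieces out of the assumed-role ARN. A rough Python equivalent of that template match (our own illustration, using a made-up ARN) shows what each attribute ends up holding:

```python
# Rough Python equivalent of the CEL extract() used in --attribute-mapping.
# The ARN below is a fabricated sample of what STS reports for an EC2 role.
arn = "arn:aws:sts::123456789012:assumed-role/source-workload/i-0abc123"

def extract(value: str, template: str) -> str:
    """Return the substring matched by the single {placeholder} in template."""
    prefix, rest = template.split("{", 1)
    _, suffix = rest.split("}", 1)
    start = value.find(prefix)
    if start < 0:
        return ""
    start += len(prefix)
    if suffix:
        end = value.find(suffix, start)
        return value[start:end] if end >= 0 else ""
    return value[start:]

# attribute.aws_role -> the IAM role name
print(extract(arn, "assumed-role/{role}/"))
# attribute.aws_ec2_instance -> the session name (the instance ID)
print(extract(extract(arn, "assumed-role/{role_and_session}"), "/{session}"))
```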
On the source workload
Copy the gcp_auth.json file created previously over to the source workload.
gcloud auth login --cred-file=gcp_auth.json
gcloud storage buckets describe gs://azure-cross-cloud-test-$projectNum
If the cross-cloud authentication worked, you should see details on the GCP storage bucket from your AWS instance without the need to store credentials that could leak and be used elsewhere.
Clean-up
gcloud storage buckets delete gs://azure-cross-cloud-test-$projectNum
gcloud iam service-accounts delete $serviceAccountEmail
gcloud iam workload-identity-pools delete $identityPool --location="global"
Accessing GCP resources from Azure
Google has provided a good amount of documentation on how to do this process. If you have a slightly different setup, you might want to have a look at some of their examples.
Configure workload identity federation with AWS or Azure
On the Azure side
You will need to configure the workload with a managed identity. You could use a system-assigned or user-assigned managed identity. If you use a user-assigned managed identity, make sure you store your managed identities in a separate protected resource group that most AD users do not have Contributor or higher access to. Anyone with access can generate tokens for the managed identity and impersonate the identity. This is why a system-assigned managed identity is preferred.
We are going to use PowerShell on the Azure side as it is a bit easier to work with than the Azure CLI. This means you will need PowerShell, the Az module, and the Az.ManagedServiceIdentity sub-module installed before running the following. We will use a VM as the example source workload.
PowerShell
$resourceName = "<name of your existing resource>"
$resourceGroup = "<resource group name>"
$location = (Get-AzResourceGroup -Name $resourceGroup).Location
# If the Virtual Machine doesn't already have a managed identity, use the following to add one.
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $resourceName
Update-AzVm -ResourceGroupName $resourceGroup -VM $vm -IdentityType SystemAssigned
# Get the application ID and object ID for the managed identity.
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $resourceName
Get-AzADServicePrincipal -ObjectId $vm.Identity.PrincipalId
Write down the ApplicationId (client ID) and Id (object ID) returned as we will need them when granting access within GCP.
On the GCP side
You will need to enable the IAM Service Account Credentials API (iam.googleapis.com) from the GCP console before proceeding.
How to enable an API
We are going to use gcloud CLI from a Linux command prompt for this example as that is the most used platform and method for accessing GCP from the command line. You can use the gcloud CLI command from Windows in a similar fashion.
Creating GCP resources
Bash
projectId="<GCP project ID>"
identityPool="azure-cloud-access"
providerId="azure-provider"
accountId="<AWS account ID>"
tenantId="<Azure tenant ID>"
clientId="<Managed identity client ID>"
objectId="<Managed identity object ID>"
serviceAccountName="azure-oidc-test"
serviceAccountEmail="${serviceAccountName}@${projectId}.iam.gserviceaccount.com"
gcloud iam workload-identity-pools create $identityPool --location="global" --description="Allows resources in other clouds to access resources in GCP" --display-name=$identityPool
gcloud iam workload-identity-pools providers create-oidc $providerId --location="global" --workload-identity-pool=$identityPool --issuer-uri="https://sts.windows.net/$tenantId/" --allowed-audiences=$clientId --attribute-mapping="google.subject=assertion.sub,google.groups=assertion.groups"
projectNum=$(gcloud projects describe $projectId --format='value(projectNumber)')
gcloud iam service-accounts create $serviceAccountName \
--description="Controlled by $clientId from Azure" \
--display-name=$serviceAccountName
gcloud iam service-accounts add-iam-policy-binding $serviceAccountEmail \
--role=roles/iam.workloadIdentityUser \
--member="principal://iam.googleapis.com/projects/$projectNum/locations/global/workloadIdentityPools/$identityPool/subject/$objectId"
gcloud iam workload-identity-pools create-cred-config \
projects/$projectNum/locations/global/workloadIdentityPools/$identityPool/providers/$providerId \
--service-account=$serviceAccountEmail \
--service-account-token-lifetime-seconds=3600 \
--azure \
--app-id-uri $clientId \
--output-file=gcp_auth.json
# Create a temporary resource we can try accessing
gcloud storage buckets create gs://azure-cross-cloud-test-$projectNum --location=northamerica-northeast2 --pap --uniform-bucket-level-access
gcloud storage buckets add-iam-policy-binding gs://azure-cross-cloud-test-$projectNum --member=serviceAccount:$serviceAccountEmail --role=roles/storage.legacyBucketOwner
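For reference, the gcp_auth.json emitted by create-cred-config is an external_account credential configuration. The exact contents depend on your gcloud version, but for the Azure case it looks roughly like the following (a representative sketch, not verbatim output; the placeholder values are ours):

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/<project number>/locations/global/workloadIdentityPools/azure-cloud-access/providers/azure-provider",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/azure-oidc-test@<GCP project ID>.iam.gserviceaccount.com:generateAccessToken",
  "credential_source": {
    "url": "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=<Managed identity client ID>",
    "headers": {
      "Metadata": "True"
    },
    "format": {
      "type": "json",
      "subject_token_field_name": "access_token"
    }
  }
}
```

The credential_source points at the Azure instance metadata service, which is how the Google auth libraries obtain the managed identity token on the VM without any stored secret.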
On the source workload
Copy the gcp_auth.json file created previously over to the source workload.
gcloud auth login --cred-file=gcp_auth.json
gcloud storage buckets describe gs://azure-cross-cloud-test-$projectNum
If the cross-cloud authentication worked, you should see details of the GCP storage bucket from your Azure VM without the need to store credentials that could leak and be used elsewhere.
Clean-up
gcloud storage buckets delete gs://azure-cross-cloud-test-$projectNum
gcloud iam service-accounts delete $serviceAccountEmail
gcloud iam workload-identity-pools delete $identityPool --location="global"