
Migrate to Cloud-Hosted Teleport Enterprise

Migrating from a self-hosted Teleport Enterprise deployment to a cloud-hosted Teleport Enterprise deployment offers scalability, reliability, and ease of management. By following the steps outlined in this guide, you can transition your Teleport deployment to the cloud with minimal disruption to your operations.

How it works

In a cloud-hosted Teleport Enterprise account, Teleport manages the Auth Service and Proxy Service for you, but you need to migrate dynamic resources and Teleport services yourself.

To migrate a self-hosted Teleport Enterprise cluster to a cloud-hosted Teleport Enterprise cluster:

  1. Set up a separate cloud-hosted Teleport account.
  2. Retrieve dynamic Teleport resources from the Auth Service backend on the self-hosted cluster and apply them against the Auth Service backend on the cloud-hosted cluster.
  3. Reconfigure Teleport agents and plugins to connect to the cloud-hosted Teleport cluster.
  4. Verify that the migration has succeeded.

Prerequisites

  • An existing Teleport Enterprise (self-hosted) cluster.
  • The tsh and tctl client tools. This guide assumes that you are using tctl to manage dynamic resources, but you can also use the Teleport Terraform provider, the Teleport Kubernetes operator, or custom scripts that call the Teleport API to manage the Auth Service backend.
  • An account with no trusted clusters enrolled. Trusted clusters are not supported in cloud-hosted Teleport Enterprise accounts. You will not be able to migrate trusted cluster resources.
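
To confirm that no trusted clusters are enrolled before you begin, you can list any trusted_cluster resources on your self-hosted cluster with tctl:

    tctl get trusted_cluster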

Step 1/4. Deploy your cloud-hosted Teleport Enterprise cluster

  1. Determine the teleport.sh subdomain you would like to use for your new cloud-hosted Teleport Enterprise account.

  2. If the license dashboard for your self-hosted Teleport Enterprise cluster is already using your desired subdomain, you can contact Teleport Support to free up the domain for re-use.

    Reach out to your Account Management team to set up your cloud-hosted Teleport Enterprise tenant.

  3. Ensure your Teleport Enterprise agents are running versions no newer than the cloud-hosted Teleport Enterprise tenant version. To check the versions of your Teleport Enterprise agents, you can use tctl to list the inventory of connected agents and their versions:

    tctl inventory ls --older-than=<version>
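
    For example, to find agents still running versions older than a hypothetical v15.0.0 tenant (substitute your tenant's actual version):

    tctl inventory ls --older-than=v15.0.0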

Validate connectivity to both the new cloud-hosted Teleport Enterprise cluster and your self-hosted Teleport Enterprise cluster. You should be able to connect to both Teleport clusters and execute tctl commands using your current credentials.

  1. Log in to the self-hosted Teleport Enterprise cluster:

    Use the --auth flag instead of --user to log in with Single Sign-On.

    tsh login --proxy=enterprise.example.com --user=myuser
    tctl status
  2. Log in to the cloud-hosted Teleport Enterprise cluster:

    Use the --auth flag instead of --user to log in with Single Sign-On.

    tsh login --proxy=example.teleport.sh --user=myuser
    tctl status
  3. Subscribe to the Teleport Enterprise status website to stay current on any issues impacting the performance of your cloud-hosted cluster.

Ensure that the recovery codes displayed when you first set up your cloud-hosted Teleport Enterprise tenant are saved securely, so as not to lose access. For your security, Teleport Support cannot assist with resetting passwords or recovering lost credentials.

Step 2/4. Migrate Teleport resources

After ensuring that both your self-hosted and cloud-hosted Teleport Enterprise clusters are up and running, you can migrate dynamic Teleport resources from one cluster to the other.

Dynamic Teleport resources such as roles and local users are stored on the Teleport Auth Service backend. Since your self-hosted Teleport Enterprise cluster uses a separate Auth Service backend from your cloud-hosted cluster, you must retrieve the resources on the first backend, then re-apply them against the second backend.

Review the dynamic resources list to see if any other resources need to be migrated. Some common dynamic resources include:

  • windows_desktop
  • apps
  • dbs
  • login_rule
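
For example, you can export each of these resource types with tctl get (a sketch; the output file names are arbitrary):

    tctl get windows_desktop > windows_desktops.yaml
    tctl get apps > apps.yaml
    tctl get dbs > dbs.yaml
    tctl get login_rule > login_rules.yaml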

To migrate roles and local users with tctl:

  1. Log in to your existing Teleport Enterprise (self-hosted) cluster and export a collection of the above-mentioned dynamic resource configuration using the tctl administrative tool. An example is shown below:

    Use the --auth flag instead of --user to log in with Single Sign-On.

    tsh login --proxy=teleport.example.com --user=admin@example.com
    tctl get roles > roles.yaml
    tctl get users > users.yaml
  2. Once you have exported the resource configuration files, log in to your cloud-hosted Teleport Enterprise tenant with an admin user and create the resources from the exported files:

    Use the --auth flag instead of --user to log in with Single Sign-On.

    tsh login --proxy=example.teleport.sh --user=admin@example.com
    tctl create -f roles.yaml
    tctl create -f users.yaml

Most SSO integrations only work with a single configured endpoint, so your existing SSO auth connector will not carry over cleanly. We recommend creating a separate SSO application in your Identity Provider specifically for the cloud-hosted Teleport Enterprise endpoint, then configuring a new auth connector in the cloud-hosted Teleport Enterprise tenant.
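
As an illustration, a new auth connector for the cloud-hosted tenant might look like the following sketch (an OIDC connector with placeholder issuer and client values and an assumed groups claim; adapt it to your Identity Provider):

    kind: oidc
    version: v3
    metadata:
      name: example-oidc
    spec:
      issuer_url: https://idp.example.com
      client_id: <client-id>
      client_secret: <client-secret>
      # The redirect URL must point at the cloud-hosted tenant,
      # not the self-hosted Proxy Service address.
      redirect_url: https://example.teleport.sh/v1/webapi/oidc/callback
      claims_to_roles:
        - claim: groups
          value: admins
          roles: [editor]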

Step 3/4. Migrate Teleport services and plugins

To migrate services such as Teleport agents, Machine ID Bots, and plugins, start by cataloging the various services you're managing with Teleport. The following resources should be considered for migration:

  • Teleport agents
  • Machine ID Bots
  • Access Request plugins
  • The Teleport Event Handler

Before migrating services, make sure you are logged in to your new cloud-hosted Teleport Enterprise account.

You can migrate Teleport services all at once or gradually, depending on your business requirements. If running Teleport at scale, you'll generally want to use a configuration management tool to automate and streamline the process of carrying out the actions involved in migrating agent configurations.

Migrate Teleport agents

To migrate Teleport agents:

  1. For each agent and Machine ID bot, obtain a valid join token. We recommend using a delegated join method.

  2. If using ephemeral tokens, ensure you specify the appropriate token type to match the Teleport services. Token types include node, app, kube, db, windowsdesktop, and others, depending on the service you wish to join to your Teleport cluster.

  3. In the following example, a new token is created with a TTL of fifteen minutes:

    tctl tokens add --type node,app,db --ttl 15m

    In this command, we assigned the token the node, app, and db types, indicating that it will allow an agent running the Teleport ssh_service, app_service, and db_service to join.

    Copy the token so you can use it later in this guide.

  4. Stop Teleport services on the agent (if applicable).

  5. Update the proxy_server or auth_servers field in the agent configuration file to point to the address of your cloud-hosted Teleport Enterprise cluster. By default, on Linux servers, the configuration file is located at /etc/teleport.yaml:

    version: v3
    teleport:
      proxy_server: example.teleport.sh:443

    If your agent configuration does not include a teleport.proxy_server field, and instead has a teleport.auth_server or teleport.auth_servers field, we recommend migrating your configuration to version: v3 and using teleport.proxy_server.

    With the teleport.proxy_server field, the agent attempts to connect to the Teleport cluster using a single mode rather than multiple modes, which takes less time and leaves less functionality to troubleshoot.
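
    For reference, a minimal before-and-after sketch of this change (the addresses are illustrative):

    # Before: v2 configuration dialing the self-hosted Auth Service
    version: v2
    teleport:
      auth_servers:
        - enterprise.example.com:3025

    # After: v3 configuration dialing the cloud-hosted Proxy Service
    version: v3
    teleport:
      proxy_server: example.teleport.sh:443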

  6. Update the auth_token or join_params.token_name field with the newly generated token.

    teleport:
      join_params:
        method: token
        token_name: new-token-goes-here
    
  7. On Linux servers, delete the local agent cache to force the agent to rejoin the new cloud-hosted Teleport Enterprise cluster. By default, the data is located in the /var/lib/teleport directory:

    rm -rf /var/lib/teleport
  8. On Linux servers, restart the Teleport process on each agent or, if you are using the teleport-kube-agent Helm chart, recycle the Kubernetes pod to apply the new configuration. This will cause the agents to re-register with the cloud-hosted Teleport Enterprise cluster and obtain new certificates signed by the new cluster's certificate authority.
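
    For example (a sketch; the systemd unit name, namespace, and StatefulSet name depend on how you installed the agent):

    # Linux host managed by systemd
    sudo systemctl restart teleport

    # teleport-kube-agent Helm chart
    kubectl -n teleport-agent rollout restart statefulset/teleport-agent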

Migrate Machine ID bots

In general, you can migrate a Machine ID bot using the following steps:

  1. Obtain a new join token.
  2. In the tbot configuration file, edit the proxy_server configuration field to point to the new Teleport cluster address and port 443.
  3. Restart tbot.
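
A minimal tbot configuration sketch showing the relevant field (the join token and storage path are illustrative placeholders; your file will also define outputs or services):

    version: v2
    proxy_server: example.teleport.sh:443
    onboarding:
      join_method: token
      token: new-token-goes-here
    storage:
      type: directory
      path: /var/lib/teleport/bot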

To learn how to restart and configure a Machine ID bot in your infrastructure, read the full documentation on deploying a Machine ID Bot.

Access Request plugins and the Event Handler

In general, you can migrate Teleport plugins using the following steps:

  1. If you are using Machine ID to produce credentials for the plugin, reconfigure the Machine ID Bot to connect to the new Teleport cluster and restart the Bot.

    Otherwise, connect to the new Teleport cluster with tctl and produce an identity file manually, then make it available to the plugin.

  2. Reconfigure the plugin by editing the teleport.address field of the plugin configuration file to point to the address of the new Teleport cluster, with port 443.

  3. Restart the plugin.
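
For example, the Teleport connection section of a plugin configuration file might look like the following sketch (TOML layout, field names, and paths vary by plugin; many plugin configs name the address field addr):

    [teleport]
    addr = "example.teleport.sh:443"
    identity = "/var/lib/teleport/plugins/access-plugin/identity"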

For specific plugins running in your infrastructure, read the full documentation for each plugin and for the Teleport Event Handler.

Step 4/4. Verify end user access and performance

Once you have migrated dynamic resources and reconfigured services to connect to your new Teleport cluster, ensure that the setup is complete.

  1. Validate that all expected resources are present in the Teleport cluster and verify their status and connectivity to ensure they are properly registered and available for connections. You can either check via the Web UI, or use tctl to get a list of all resources and verify their registration and status.

    For example, to list all nodes registered with the Teleport Cluster, you can run the command below:

    tctl nodes ls

    Similarly, to list all other registered resources, you can run the commands below:

    List all registered Kubernetes clusters:

    tctl kube ls

    List all registered databases:

    tctl db ls

    List all registered applications:

    tctl apps ls

    List all registered Windows desktops:

    tctl get windows_desktop
  2. Ensure that end users have the expected SSO access to your infrastructure.

  3. Establish break-glass access procedures to ensure access to infrastructure in case your cloud-hosted Teleport Enterprise cluster becomes unavailable.

    For example, you can run OpenSSH with a limited key following our best practices on How to SSH properly.

We recommend configuring systemd to start OpenSSH for 5 minutes at boot, then shut it down. The master keys should be stored in a secure vault. To break the glass, obtain the master key, reboot the server, and connect using an OpenSSH client within 5 minutes.
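
A sketch of such a break-glass unit (the unit name is hypothetical; RuntimeMaxSec tells systemd to stop sshd after 300 seconds):

    # /etc/systemd/system/break-glass-sshd.service (hypothetical)
    [Unit]
    Description=Break-glass OpenSSH window at boot
    After=network-online.target

    [Service]
    # Run sshd in the foreground so systemd tracks it directly
    ExecStart=/usr/sbin/sshd -D
    # Stop the service automatically after 5 minutes
    RuntimeMaxSec=300
    Restart=no

    [Install]
    WantedBy=multi-user.target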

Further reading

For more information on using cloud-hosted Teleport Enterprise, refer to our documentation on signing up for a cloud-hosted Teleport Enterprise account.