The PTD (Posit Team Dedicated) CLI is a command-line tool for managing Posit Team Dedicated environments across multiple cloud providers (AWS and Azure). It provides a unified interface for deploying, managing, and interacting with both control room and workload environments.
Implementation: Go (using Cobra framework)
Location: /cmd directory
Main entry point: /cmd/main.go
Build and install the CLI:
```
just cli
```

This compiles the CLI and places the binary at `~/.local/bin/ptd` (ensure this directory is in your `PATH`).
The CLI searches for configuration files in the following order:
1. `~/.config/ptd/ptdconfig.yaml`
2. `~/.local/share/ptd/ptdconfig.yaml`
3. `./ptdconfig.yaml` (current directory)
4. `~/ptdconfig.yaml` (home directory)
All configuration can be overridden using environment variables with the PTD_ prefix:
- `PTD_VERBOSE=true` - Enable verbose logging
- `PTD_TARGETS_CONFIG_DIR` - Path to the targets configuration directory; applies to all commands that accept a control room or workload target name (see Custom Targets Configuration Directory)
- `PROJECT_ROOT` - Override the project root directory
All commands support these global flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--verbose` | `-v` | bool | `false` | Enable verbose/debug output |
| `--targets-config-dir` | | string | `./infra` | Path to targets configuration directory (absolute or relative to project root) |
Note: --targets-config-dir applies to all commands which accept a control room or workload target name. For detailed information about configuring custom targets directories, see the Custom Targets Configuration Directory guide.
The CLI determines the project root in this order:
1. `PROJECT_ROOT` environment variable
2. Binary location (two levels up from `.local/bin/ptd`)
3. Git repository root
Print the version number of the PTD CLI.
Usage:
```
ptd version
```

Example:

```
$ ptd version
PTD CLI v1.0.0
```

Implementation: /cmd/version.go:13
Manage PTD configuration files and settings.
Show the current configuration values and which config file is being used.
Usage:
```
ptd config show
```

Example Output:

```
PTD Configuration
================
Config file: /Users/username/.config/ptd/ptdconfig.yaml

Configuration values:
verbose: false
top: /Users/username/source/ptd
```
Implementation: /cmd/config.go:21
Initialize a new configuration file with default values at ~/.config/ptd/ptdconfig.yaml.
Usage:
```
ptd config init
```

Example:

```
$ ptd config init
Configuration file created: /Users/username/.config/ptd/ptdconfig.yaml
You can now edit this file to customize your ptd settings.
```

Implementation: /cmd/config.go:49
Show the paths where PTD looks for configuration files.
Usage:
```
ptd config path
```

Example Output:

```
PTD configuration file search paths:
1. /Users/username/.config/ptd/ptdconfig.yaml
2. /Users/username/.local/share/ptd/ptdconfig.yaml
3. ./ptdconfig.yaml (current directory)
4. /Users/username/ptdconfig.yaml (home directory)

Environment variables with 'PTD_' prefix are also read automatically.
```
Implementation: /cmd/config.go:58
Assume the admin role in a target account and export credentials.
Usage:
```
ptd assume <target> [flags]
```

Arguments:

- `<target>` - Target name (supports auto-completion from available targets)
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--export` | `-e` | bool | `true` | Export the role credentials |
Examples:
Export AWS credentials for a target:
```
$ ptd assume testing01-staging
# Exporting session for arn:aws:sts::123456789012:assumed-role/admin.posit.team/user@example.com
# In order to use this directly, run:
#   eval $(ptd assume testing01-staging)
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
```

Evaluate credentials directly in your shell:

```
eval $(ptd assume testing01-staging)
```

For Azure targets:

```
$ ptd assume azure-target
# Azure session: user@example.com
# Azure credentials are not exported; the `az` CLI state is set instead.
```

Implementation: /cmd/assume.go:19
Ensure a target is converged by running infrastructure deployment steps. This command orchestrates the deployment using Pulumi to bring the target to its desired state.
See Ensure Command Flow for details on resources managed by this command.
Usage:
```
ptd ensure <target> [flags]
```

Arguments:

- `<target>` - Target name (supports auto-completion from available targets)
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--dry-run` | `-n` | bool | `false` | Dry run the command without making changes |
| `--preview` | `-p` | bool | `true` | Preview the stack changes before applying |
| `--cancel` | `-c` | bool | `false` | Clear locks from the stack |
| `--refresh` | `-r` | bool | `false` | Refresh the stack state before applying |
| `--auto-apply` | `-a` | bool | `false` | Skip manual approval and automatically apply changes |
| `--destroy` | | bool | `false` | Destroy the Pulumi stack |
| `--list-steps` | `-l` | bool | `false` | List all steps for the target (including custom steps) and exit |
| `--start-at-step` | | string | `""` | Start at a specific step (supports tab completion) |
| `--only-steps` | | []string | `nil` | Only run specific steps (supports tab completion) |
| `--exclude-resources` | | []string | `nil` | Exclude specific resources from the ensure process |
| `--target-resources` | | []string | `nil` | Target specific resources for the ensure process |
Step Names:
Available steps vary by target type (workload vs control room). Steps are defined in /lib/steps/.
Common workload steps (in order):
1. `bootstrap` - Initial infrastructure setup
   - Creates Pulumi state storage (S3 bucket or Azure blob storage)
   - Creates encryption keys (KMS for AWS, Key Vault for Azure)
   - Initializes secrets for workload and sites
   - Requires: Control room target configuration
2. `persistent` - Persistent resources (storage, databases)
   - Creates RDS/Azure Database instances
   - Creates file systems (EFS/Azure Files)
   - Creates S3/blob storage buckets for chronicle and package manager
   - Outputs: Database URLs, file system DNS names, mimir password
3. `postgres_config` - PostgreSQL database configuration
   - Configures PostgreSQL databases and users
   - Requires: Database endpoints from the persistent step
   - Requires: Proxy connection (if Tailscale is not enabled)
4. `images` - Copy container images
   - Copies Posit product images from the control room registry to the workload registry
   - Requires: Source (control room) registry credentials
   - Requires: Destination (workload) registry credentials
5. `registry` - Container registry setup (`ecr_cache` for AWS, `acr_cache` for Azure)
   - Creates pull-through cache rules for Docker Hub
   - Requires: Docker Hub OAT from the control room secret store
6. `kubernetes` - Kubernetes cluster setup (`eks` for AWS, `aks` for Azure)
   - Creates the EKS or AKS Kubernetes cluster
   - Configures cluster networking and security
   - Requires: Proxy connection (if Tailscale is not enabled)
7. `clusters` - Cluster configuration
   - Configures Kubernetes cluster resources and add-ons
   - Requires: Kubernetes cluster from the previous step
   - Requires: Proxy connection
8. `helm` - Helm chart deployment
   - Deploys Posit Team products via Helm charts
   - Requires: Kubernetes cluster access
   - Requires: Proxy connection (if Tailscale is not enabled)
9. `sites` - Site configuration
   - Configures individual Posit Team sites
   - Requires: Kubernetes cluster access
   - Requires: Proxy connection
10. `persistent_reprise` - Final persistent resource updates
    - Re-runs the persistent step to update secrets with the final state
    - Updates workload secrets and control room mimir passwords
Common control room steps (in order):
1. `workspaces` - Workspace configuration
   - Creates workspaces infrastructure for the control room
   - Configures workspace resources via Pulumi
2. `persistent` - Persistent resources (storage, databases)
   - Creates RDS/Azure Database instances
   - Creates file systems and storage resources
   - Outputs: Database URLs and connection information
3. `postgres_config` - PostgreSQL database configuration
   - Configures PostgreSQL databases and users for the control room
   - Requires: Database endpoints from the persistent step
   - Requires: Proxy connection (if Tailscale is not enabled)
4. `cluster` - Cluster setup
   - Creates and configures the control room Kubernetes cluster
   - Deploys cluster infrastructure and Helm charts
   - Requires: Proxy connection

Note: Control rooms do not have a `bootstrap` step; `bootstrap` applies only to workloads.
Examples:
List all available steps for a target:

```
ptd ensure testing01-staging --list-steps
```

Full deployment with preview:

```
ptd ensure testing01-staging
```

Auto-apply without manual confirmation:

```
ptd ensure testing01-staging --auto-apply
```

Run only specific steps:

```
ptd ensure testing01-staging --only-steps cluster,helm
```

Start at a specific step:

```
ptd ensure testing01-staging --start-at-step helm
```

Destroy a stack (runs steps in reverse order):

```
ptd ensure testing01-staging --destroy
```

Target specific resources:

```
ptd ensure testing01-staging --target-resources my-resource
```

Exclude resources:

```
ptd ensure testing01-staging --exclude-resources problematic-resource
```

Dry run to see what would change:

```
ptd ensure testing01-staging --dry-run
```

Implementation: /cmd/ensure.go:50
Notes:
- For workload targets, the associated control room target is loaded automatically
- A proxy session is started automatically if required by the steps and Tailscale is not enabled
- When `--destroy` is specified, steps run in reverse order
Start an interactive shell or run a one-shot command with credentials, kubeconfig, and environment configured for a target. Optionally, work within a specific Pulumi stack directory.
Usage:
```
ptd workon <cluster> [step] [flags]
ptd workon <cluster> [step] -- <command> [args...]
```

Arguments:

- `<cluster>` - Target name (supports auto-completion)
- `[step]` - Optional: specific Pulumi step/stack to work on
- `-- <command>` - Optional: run a single command instead of an interactive shell
Examples:
Open a shell with target credentials and kubeconfig:

```
ptd workon testing01-staging
```

Work on a specific step (opens a shell in the Pulumi stack directory):

```
ptd workon testing01-staging helm
```

Run a one-shot kubectl command:

```
ptd workon testing01-staging -- kubectl get pods -n posit-team
```

Run a one-shot Pulumi command within a step:

```
ptd workon testing01-staging helm -- pulumi stack export
```

What it does:
- Loads target configuration
- Assumes appropriate credentials
- Starts a SOCKS proxy if needed (non-Tailscale targets)
- Sets up kubeconfig using the native SDK (no `aws`/`az` CLI dependency)
- Creates/loads the Pulumi stack if a step is specified
- Either:
  - Interactive mode (no `--`): opens a shell with the full environment
  - Command mode (with `--`): runs the command and exits with its exit code
Environment provided:
- Cloud provider credentials (AWS/Azure)
- `KUBECONFIG` pointing to a configured kubeconfig file
- `PTD_WORKON` - Target name (and step if specified, e.g., `testing01-staging` or `testing01-staging:helm`)
- `PULUMI_STACK_NAME` (if a custom step is specified)
- Working directory set to the Pulumi stack (if a step is specified)
Shell prompt configuration:
To show the workon target in your shell prompt, add one of these to your shell config:
```
# Bash (~/.bashrc)
PS1='${PTD_WORKON:+[ptd:$PTD_WORKON] }'"$PS1"

# Zsh (~/.zshrc)
PROMPT='${PTD_WORKON:+[ptd:$PTD_WORKON] }'"$PROMPT"
```

This displays `[ptd:testing01-staging]` when in a workon shell.
Exit code propagation: In command mode, the exit code of the executed command is propagated. This enables scripting and automation.
Implementation: /cmd/workon.go:25
Example sessions:
```
# Interactive shell
$ ptd workon testing01-staging helm
Starting interactive shell in /path/to/stack with session identity arn:aws:sts::123456789012:assumed-role/admin.posit.team/user@example.com
To exit the shell, type 'exit' or press Ctrl+D

# One-shot command
$ ptd workon ganso01-staging -- kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-152-102-54.us-east-2.compute.internal   Ready    <none>   9d    v1.32.9-eks-ecaa3a6

# Exit code propagation
$ ptd workon ganso01-staging -- kubectl get nonexistent; echo $?
1
```

Start a SOCKS5 proxy session to the bastion host in a given target. By default the proxy binds to `localhost:1080` for interactive/browser use; `--daemon` uses a deterministic per-workload port (10000–19999).
Usage:
```
ptd proxy <target> [flags]
```

Arguments:

- `<target>` - Target name (supports auto-completion)
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--daemon` | `-d` | bool | `false` | Run the proxy in the background |
| `--stop` | `-s` | bool | `false` | Stop any running proxy session |
Examples:
Start the proxy in the foreground (blocks until Ctrl+C):

```
ptd proxy testing01-staging
```

Start the proxy in the background:

```
ptd proxy testing01-staging --daemon
```

Stop a running proxy:

```
ptd proxy testing01-staging --stop
```

Implementation: /cmd/proxy.go:26
Notes:
- The interactive proxy binds to `localhost:1080`; `--daemon` binds to a deterministic per-workload port (10000–19999)
- Proxy session state is stored in `~/.local/share/ptd/proxies.json`
- Works with both AWS and Azure targets
- Automatically handles credential management
- Not needed if Tailscale is enabled for the target
Use cases:
- Access private Kubernetes clusters
- Connect to internal services
- Required for the `ensure` command when Tailscale is not enabled
Run k9s (Kubernetes CLI UI) on a target cluster with proper authentication and proxy configuration.
Usage:
```
ptd k9s <cluster> [flags]
```

Arguments:

- `<cluster>` - Target name (supports auto-completion)
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| `--namespace` | `-n` | string | `"posit-team"` | Namespace to focus on |
| `--args` | | []string | `[]` | Additional arguments to pass to k9s |
Examples:
Open k9s in the default namespace:

```
ptd k9s testing01-staging
```

Open k9s in a specific namespace:

```
ptd k9s testing01-staging -n kube-system
```

Pass additional k9s arguments:

```
ptd k9s testing01-staging --args="--readonly"
```

What it does:
- Loads target configuration
- Starts proxy session (if needed and Tailscale not enabled)
- Assumes credentials
- Creates temporary kubeconfig with:
- Proper cluster configuration
- SOCKS5 proxy settings (if needed)
- Authentication configured
- Launches k9s with configured environment
Implementation: /cmd/k9s.go:30
Notes:
- Automatically handles cluster name resolution for both control room and workload targets
- For AWS EKS clusters, uses `aws eks update-kubeconfig`
- The kubeconfig is temporary and stored at `/tmp/kubeconfig-{target-hash}`
- Checks Tailscale connection status if enabled
Cluster naming patterns:
- Control room: `main01-{environment}` (e.g., `control01-staging`)
- Workload: `{target_name}-{release}` (e.g., `testing01-main`)
Return a stable hash value for a target name. Useful for generating unique identifiers based on target names.
Usage:
```
ptd hash <target>
```

Arguments:

- `<target>` - Target name (supports auto-completion)

Example:

```
$ ptd hash testing01-staging
a1b2c3d4
```

Implementation: /cmd/hash.go:14
Use cases:
- Generate unique resource names
- Create consistent identifiers across deployments
- Useful in scripts and automation
Run administrative commands for managing PTD infrastructure.
Generate the admin principal role CloudFormation template for AWS accounts.
Usage:
```
ptd admin generate-role <control-room-target> [flags]
```

Arguments:

- `<control-room-target>` - Control room target name (e.g., `control01-staging`)
Examples:
```
ptd admin generate-role control01-staging > admin-role.yaml
```

What it generates: a CloudFormation template with:

- Managed policy: `PositTeamDedicatedAdminPolicy`
- IAM role: `admin.posit.team`
- Trust policy for authorized principals (from the control room config)
- Permissions boundary
- Self-protection policies
Usage: Deploy the generated template to AWS accounts to set up admin access:
```
ptd admin generate-role control01-staging > template.yaml
aws cloudformation create-stack \
  --stack-name ptd-admin-role \
  --template-body file://template.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=TrustedPrincipals,ParameterValue="arn:aws:iam::123456789012:user/admin"
```

The `admin.posit.team` IAM role is used by PTD to manage infrastructure in each AWS account. This role must exist in every AWS account that PTD manages (both control room and workload accounts). The role name is hardcoded: PTD always assumes `admin.posit.team` unless a `custom_role` is configured in the workload's `ptd.yaml`.
Setup steps:
1. Generate the CloudFormation template:

   ```
   ptd admin generate-role <control-room-target> > template.yaml
   ```

2. Review the generated template. The exact permissions are defined in code (`lib/aws/iam.go`) and may change between PTD versions. Always inspect the template before deploying.
3. Deploy to each AWS account that PTD will manage (control room and workload accounts):

   ```
   aws cloudformation create-stack \
     --stack-name ptd-admin-role \
     --template-body file://template.yaml \
     --capabilities CAPABILITY_NAMED_IAM \
     --parameters ParameterKey=TrustedPrincipals,ParameterValue="<principal-arns>"
   ```

4. Add the deploying principals to `trusted_principals` in the control room's `ptd.yaml`.
What the template creates:
- `PositTeamDedicatedAdminPolicy` - a managed policy granting permissions across AWS services used by PTD (EC2, EKS, S3, RDS, Route 53, IAM, KMS, Secrets Manager, ACM, SSM, ECR, and others). The policy is self-constraining:
  - IAM operations are scoped to resources matching `*.posit.team` naming patterns
  - S3 operations are scoped to buckets prefixed with `posit-*` or `ptd-*`
  - The role cannot modify the `PositTeamDedicatedAdminPolicy` itself (prevents privilege escalation)
  - All IAM roles PTD creates during deployment must use this policy as a permissions boundary
- `admin.posit.team` - an IAM role with the above policy attached and set as its permissions boundary. The trust policy allows assumption by the principals specified in the `TrustedPrincipals` parameter.
Custom roles:
If your organization cannot use the standard `admin.posit.team` role, you can configure an alternative via `custom_role` in the workload's `ptd.yaml`:

```
custom_role:
  role_arn: "arn:aws:iam::123456789012:role/my-custom-role"
  external_id: "optional-external-id"
```

The custom role must have permissions equivalent to `PositTeamDedicatedAdminPolicy`. Generate the template and use it as a reference when building your custom policy.
Many commands support auto-completion for `<target>` arguments. This is powered by the `ValidTargetArgs` function, which reads available targets from `ptd.yaml` files.
Implementation: /cmd/internal/legacy/ptd_config.go
To enable shell completion:
```
# Bash
ptd completion bash > /etc/bash_completion.d/ptd

# Zsh
ptd completion zsh > "${fpath[1]}/_ptd"

# Fish
ptd completion fish > ~/.config/fish/completions/ptd.fish
```

All commands follow the Cobra pattern:

- Each command is defined in its own file under `/cmd/`
- Commands register themselves in `init()` functions
- The main entry point is at `/cmd/main.go`
Located in /lib/:
- `aws/` - AWS-specific implementations (credentials, EKS, IAM, proxy, S3, SSM)
- `azure/` - Azure-specific implementations (credentials, ACR, AKS, Key Vault, proxy, storage)
- `steps/` - Deployment step definitions (bootstrap, cluster, helm, images, persistent, workspaces, sites)
- `types/` - Core type definitions (Target, Credentials, etc.)
- `proxy/` - Proxy session management
- `pulumi/` - Pulumi integration (inline, Python)
- `helpers/` - Utility functions (file operations, networking, process management)
- `secrets/` - Secret management
- `containers/` - Container operations
- `humans/` - User/principal management
Targets are loaded from ptd.yaml files and implement the types.Target interface:
- AWS targets: `aws.Target` (implements the interface for AWS/EKS)
- Azure targets: `azure.Target` (implements the interface for Azure/AKS)
Target features:
- Cloud provider abstraction
- Credential management
- Region configuration
- Proxy requirements
- Tailscale support
- Control room vs workload distinction
Credentials are managed through the types.Credentials interface:
- `Identity()` - Returns an identity string
- `EnvVars()` - Returns a map of environment variables
Implementations:
- AWS: Assumes IAM roles, returns temporary credentials
- Azure: Uses Azure CLI authentication
Proxy sessions enable secure access to private resources:
- SOCKS5 proxy; interactive mode binds to `localhost:1080`, while daemon/ensure/workon use deterministic per-workload ports (10000–19999)
- Managed lifecycle (Start/Stop/Wait)
- State persistence in `~/.local/share/ptd/proxies.json`
- Automatic integration with the ensure and k9s commands
AWS: Uses SSM Session Manager (`aws ssm start-session --target <bastion-instance>`)

Azure: Uses an Azure Bastion proxy connection (`az network bastion tunnel`)
Build and test the CLI:

```
just cli
just test-cmd
```

To add a new command:

- Create a new file in `/cmd/` (e.g., `newcommand.go`)
- Define the command using Cobra:
```go
var newCmd = &cobra.Command{
	Use:   "new <arg>",
	Short: "Short description",
	Long:  `Long description`,
	Run: func(cmd *cobra.Command, args []string) {
		// Implementation
	},
}

func init() {
	rootCmd.AddCommand(newCmd)
	// Add flags if needed
}
```

- Add any required flags in `init()`
- Implement the command logic
- Add tests in `newcommand_test.go`
Uses Go's log/slog package with charmbracelet/log for terminal output:
- `slog.Info()` - General information
- `slog.Debug()` - Debug information (requires `--verbose`)
- `slog.Warn()` - Warnings
- `slog.Error()` - Errors
Control log level:
```
ptd --verbose <command>   # Enable debug logging
```

Deploy a full environment:

```
# 1. Ensure control room is up
ptd ensure control01-staging --auto-apply

# 2. Deploy workload
ptd ensure testing01-staging --auto-apply

# 3. Access the cluster
ptd k9s testing01-staging
```

Debug a deployment interactively:

```
# 1. Open interactive shell
ptd workon testing01-staging helm

# 2. Manually run Pulumi commands
pulumi preview
pulumi up

# 3. Check specific resources
pulumi stack output
pulumi logs
```

Preview before applying:

```
# Preview changes
ptd ensure testing01-staging

# Apply after review
ptd ensure testing01-staging --auto-apply
```

Use a background proxy:

```
# Start proxy in background
ptd proxy testing01-staging --daemon

# Configure application to use the SOCKS5 proxy
export HTTPS_PROXY=socks5://localhost:$(ptd proxy port testing01-staging)

# When done, stop proxy
ptd proxy testing01-staging --stop
```

Ensure `~/.local/bin` is in your PATH:
```
export PATH="$HOME/.local/bin:$PATH"
```

Verify you can assume the role:

```
ptd assume <target> -v
```

Check that your AWS/Azure CLI is configured:

```
aws sts get-caller-identity
az account show
```
az account show- Check bastion instance is running
- Verify security groups allow SSM/Bastion traffic
- Try manual proxy connection
- Enable verbose logging:
ptd proxy <target> -v
- Verify the cluster exists: `aws eks list-clusters --region <region>`
- Check the kubeconfig: `cat /tmp/kubeconfig-<hash>`
- Test kubectl: `kubectl --kubeconfig /tmp/kubeconfig-<hash> get nodes`
- Enable verbose logging: `ptd k9s <target> -v`
- Check the stack exists: `pulumi stack ls`
- Verify credentials: `ptd assume <target>`
- Try clearing locks: `ptd ensure <target> --cancel`
- Work interactively: `ptd workon <target> <step>`
Example configuration file:
```
verbose: false
# Add custom configuration values as needed
```

Target configurations are defined in `ptd.yaml` files throughout the `/infra` directory. These are loaded by the CLI's internal legacy configuration system.
Example structure:
```
targets:
  testing01-staging:
    cloud_provider: aws
    region: us-east-1
    control_room: false
    tailscale_enabled: false
    # Additional target-specific configuration
```

The `force_maintenance` option enables cluster version upgrades to proceed even when they would normally be blocked by safety checks.
```
clusters:
  "20250115":
    spec:
      cluster_version: "1.33"
      force_maintenance: true  # Bypass upgrade-blocking checks
```

| Cloud Provider | Behavior |
|---|---|
| AWS EKS | Sets `ForceUpdateVersion` on the cluster, which overrides upgrade-blocking readiness checks including EKS Insights validations (deprecated APIs, compatibility issues, cluster health checks) |
| Azure AKS | Sets `UpgradeSettings.OverrideSettings.ForceUpgrade` with a 24-hour expiration window, which bypasses PodDisruptionBudget (PDB) constraints and takes precedence over all other drain configurations |
When to use:
- During planned maintenance windows when you accept workload disruption
- When PDBs are blocking necessary security or version upgrades (Azure)
- When EKS upgrade insights are blocking an upgrade you've assessed as safe (AWS)
- When you need to force through an upgrade that has stalled
Caution:
- Azure: Bypasses PodDisruptionBudget protections, which may cause service disruption. Pods protected by PDBs may be evicted without respecting minimum availability guarantees.
- AWS: Bypasses pre-upgrade validation checks. Review EKS Insights warnings before forcing an upgrade to understand what issues are being overridden.
- Only enable temporarily during maintenance windows, then set back to `false`
Default: false (safety checks are respected during upgrades)
- Custom Targets Configuration Directory - Configure custom target directories
- Ensure Command Flow
- Main README - Project overview
- Development Environment Guide - Setup prerequisites
- Justfile - Build and development tasks
- Team Operator - Kubernetes operator
- Python Pulumi (Legacy) - Legacy Python CLI
```go
type Target interface {
	Name() string
	Region() string
	CloudProvider() CloudProvider
	ControlRoom() bool
	Credentials(ctx context.Context) (Credentials, error)
	HashName() string
	TailscaleEnabled() bool
	PulumiBackendUrl() string
	PulumiSecretsProviderKey() string
}
```

```go
type Credentials interface {
	Identity() string
	EnvVars() map[string]string
}
```

```go
type Step interface {
	Name() string
	Set(target Target, controlRoom Target, opts StepOptions)
	Run(ctx context.Context) error
}
```

Last updated: 2026