
Ruslan Dashkin — Cloud & Platform Engineer

I design and operate production-grade AWS platforms with automation as the default. My work focuses on performance, cost efficiency, and secure delivery through scale-to-zero patterns, OIDC-based authentication, and modular Terraform architecture.

My focus:

  • AWS infrastructure as code (Terraform, remote state, policy-as-code)
  • CI/CD pipelines with GitHub Actions and OIDC (no long-lived AWS keys)
  • Scale-to-zero patterns (wake on demand, auto-sleep when idle)
  • Observability-first systems (metrics, dashboards, saturation visibility)
  • Infrastructure platforms designed and validated for realistic behavior under load, not just happy-path demos

Core Tech Stack

  • Cloud Platforms: AWS (EC2, ECS Fargate, Lambda, API Gateway, RDS, DynamoDB, S3, CloudFront, Route 53, SageMaker)

  • Infrastructure as Code: Terraform (remote state & locking, modular design, policy-as-code), tflint, tfsec, checkov

  • CI/CD & Security: GitHub Actions, OIDC (keyless authentication), automated plan/apply workflows, least-privilege IAM

  • Containers & Orchestration: Docker, Kubernetes (k3s), Helm, ECS Fargate

  • AI & GPU Platforms: vLLM serving, concurrency control, bounded queueing, GPU telemetry (DCGM)

  • Observability & Reliability: Prometheus, Grafana, CloudWatch, metrics-driven tuning, golden signals

  • Architecture Patterns: Scale-to-zero, event-driven systems, cost-aware design, automation-first infrastructure
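
The keyless OIDC authentication listed above can be sketched in Terraform. This is a minimal illustration, not the exact role from any of the repos below; the role name and the repo filter in the trust condition are placeholders:

```hcl
# Hypothetical IAM role that GitHub Actions assumes via OIDC,
# replacing long-lived AWS access keys in CI/CD.
data "aws_iam_openid_connect_provider" "github" {
  url = "https://token.actions.githubusercontent.com"
}

resource "aws_iam_role" "ci" {
  name = "github-actions-ci" # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = data.aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        # Only tokens issued for the expected audience and repos may assume the role.
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:rusets/*"
        }
      }
    }]
  })
}
```

Scoping the `sub` claim to specific repositories (or branches) is what keeps the role least-privilege: a token minted for any other repo cannot assume it.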


Flagship Projects

AWS Multi-Tier Infra — Wake/Sleep Platform

Repo: https://github.com/rusets/aws-multi-tier-infra

A full 3-tier production-style web architecture on AWS featuring:

  • VPC, ALB, and EC2 application tier
  • RDS MySQL in private subnets (no public exposure)
  • S3 + CloudFront for static asset delivery
  • Wake-on-demand orchestration (Lambda → GitHub Actions → Terraform Apply)
  • Idle reaper automation that tears down infrastructure when unused
  • Remote state backend (S3 + DynamoDB lock table)
  • GitHub Actions OIDC for keyless CI/CD

Purpose: showcase a secure, automated, and cost-aware AWS architecture with real infrastructure lifecycle management.
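
The remote state backend pattern used here (S3 for state, DynamoDB for locking) can be sketched as follows; the bucket and table names are hypothetical:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state"             # placeholder bucket name
    key            = "multi-tier/terraform.tfstate" # per-stack state isolation
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"             # lock table prevents concurrent applies
    encrypt        = true                           # server-side encryption at rest
  }
}
```

The DynamoDB lock table needs a single string partition key named `LockID`; with it in place, two concurrent `terraform apply` runs cannot corrupt the shared state.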


GPU-Accelerated LLM Inference Service

Repo: https://github.com/rusets/gpu-llm-inference-service

Single-node GPU inference platform built around:

  • vLLM for high-performance LLM serving on a single GPU
  • Custom FastAPI gateway with explicit concurrency limits and bounded queueing
  • Deterministic backpressure (429 / 503) instead of uncontrolled latency growth
  • Prometheus + Grafana for full observability (latency, queue depth, saturation, GPU metrics)
  • Docker Compose stack for reproducible local deployment

Purpose: demonstrate how to operate a GPU-backed LLM service with controlled concurrency, measurable saturation, and production-style observability rather than just exposing a model endpoint.


Helmkube Autowake — Production-Style CI/CD Kubernetes Platform

Repo: https://github.com/rusets/helmkube-autowake-cicd

A compact k3s cluster on EC2, fully automated through Terraform and GitHub Actions:

  • k3s node bootstrap via user data + SSM
  • Helm-driven application deployment from CI/CD
  • Prometheus + Grafana monitoring stack deployed automatically
  • Wake/sleep automation to eliminate idle costs
  • OIDC authentication (no AWS keys)
  • Remote state, IAM roles, monitoring, and bootstrap scripts

Purpose: implement a fully automated Kubernetes environment on AWS with GitHub → Terraform → Helm delivery pipeline and integrated observability.
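
The k3s bootstrap via user data can be sketched in Terraform. The AMI, instance type, and profile name are placeholders; the install command is the standard k3s bootstrap script:

```hcl
resource "aws_instance" "k3s" {
  ami                  = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type        = "t3.medium"             # placeholder size
  iam_instance_profile = "k3s-ssm-profile"       # placeholder; grants SSM access

  # Installs a single-node k3s cluster on first boot.
  user_data = <<-EOF
    #!/bin/bash
    curl -sfL https://get.k3s.io | sh -
  EOF

  tags = { Name = "k3s-node" }
}
```

Once the node is up, CI/CD can reach it through SSM (no open SSH port) and drive Helm releases against the cluster.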


Docker ECS Deployment — Fargate + On-Demand Provisioning

Repo: https://github.com/rusets/docker-ecs-deployment

ECS Fargate–based container platform with:

  • GitHub Actions CI/CD (build → push to ECR → deploy to ECS)
  • On-demand provisioning via Lambda + EventBridge
  • Terraform-managed VPC, security groups, IAM roles, task definitions, and services
  • Remote state backend + OIDC (no long-lived AWS credentials)

Purpose: implement a cost-aware, event-driven container platform on AWS and illustrate when ECS Fargate is the pragmatic alternative to Kubernetes.
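
The event-driven auto-sleep side of this pattern can be sketched as an EventBridge schedule invoking a "sleep" Lambda. This assumes a function `aws_lambda_function.sleep` (defined elsewhere) that scales the ECS service's desired count to zero when idle:

```hcl
# Periodic idle check; rule name and interval are placeholders.
resource "aws_cloudwatch_event_rule" "idle_check" {
  name                = "ecs-idle-check"
  schedule_expression = "rate(15 minutes)"
}

resource "aws_cloudwatch_event_target" "sleep" {
  rule = aws_cloudwatch_event_rule.idle_check.name
  arn  = aws_lambda_function.sleep.arn # hypothetical sleep Lambda
}

# EventBridge must be explicitly allowed to invoke the function.
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.sleep.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.idle_check.arn
}
```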


SageMaker Serverless ML Inference Platform

Repo: https://github.com/rusets/ml-sagemaker-serverless

End-to-end ML inference pipeline on AWS using:

  • SageMaker Serverless for fully managed model hosting
  • Lambda + API Gateway for a clean HTTP prediction API
  • Static UI on S3 + CloudFront (image upload → prediction result)
  • Terraform for complete infrastructure provisioning (IAM, API, buckets, hosting)
  • Keyless CI/CD pipeline (GitHub Actions → AWS OIDC)

Purpose: implement a fully serverless, production-style ML inference API with infrastructure-as-code, secure CI/CD, and zero idle compute management.
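
The serverless hosting piece can be sketched as a SageMaker endpoint configuration with a `serverless_config` block; the names and capacity numbers are placeholders, and `aws_sagemaker_model.example` is assumed to be defined elsewhere:

```hcl
resource "aws_sagemaker_endpoint_configuration" "serverless" {
  name = "example-serverless-config" # placeholder name

  production_variants {
    variant_name = "AllTraffic"
    model_name   = aws_sagemaker_model.example.name # hypothetical model resource

    # Serverless inference: AWS manages capacity, billed per invocation.
    serverless_config {
      max_concurrency   = 5    # cap on concurrent invocations
      memory_size_in_mb = 2048 # memory allocated per invocation
    }
  }
}
```

Because there is no provisioned instance, the endpoint costs nothing while idle, at the price of cold-start latency on the first request.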


CI/CD Pipeline for Application Deployment — EC2 + Scale-to-Zero

Repo: https://github.com/rusets/CI-CD-Pipeline-for-Application-Deployment

A CI/CD-focused deployment platform for running a web application on EC2 with:

  • GitHub Actions pipelines (build → test → Terraform plan/apply)
  • Wake/sleep automation for cost-optimized EC2 usage
  • CloudWatch dashboards and alarms for runtime visibility
  • Separate Terraform stack for wake/status Lambdas and API Gateway
  • Remote state backend + OIDC authentication (no long-lived AWS keys)

Purpose: implement a clean separation between application delivery and infrastructure management, with cost-aware automation and secure, fully automated deployments.
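
The wake endpoint in the separate Lambda/API Gateway stack can be sketched with API Gateway v2 (HTTP API). This assumes a `aws_lambda_function.wake` defined elsewhere; the Lambda invoke permission is omitted for brevity:

```hcl
resource "aws_apigatewayv2_api" "wake" {
  name          = "wake-api" # placeholder name
  protocol_type = "HTTP"
}

# Proxy integration forwards the whole request to the wake Lambda.
resource "aws_apigatewayv2_integration" "wake" {
  api_id                 = aws_apigatewayv2_api.wake.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.wake.invoke_arn # hypothetical
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "wake" {
  api_id    = aws_apigatewayv2_api.wake.id
  route_key = "POST /wake"
  target    = "integrations/${aws_apigatewayv2_integration.wake.id}"
}
```

Keeping this stack separate from the application's Terraform means the wake/status machinery survives even when the application infrastructure itself is torn down.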


RideBot Infra — Serverless “Ride Request” Bot

Repo: https://github.com/rusets/ridebot-infra

A fully serverless backend powering a Telegram-based ride request system, built using:

  • API Gateway + Lambda (event-driven HTTP backend)
  • DynamoDB for state management and request tracking
  • Amazon Location Service for geolocation processing
  • Terraform-managed IAM, routing, and infrastructure configuration

Purpose: implement an event-driven architecture with chat-platform integration, leveraging a pure pay-per-use serverless model with no idle infrastructure.
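
The pay-per-use state store can be sketched as an on-demand DynamoDB table; the table and key names are placeholders:

```hcl
resource "aws_dynamodb_table" "rides" {
  name         = "ride-requests"   # placeholder name
  billing_mode = "PAY_PER_REQUEST" # no provisioned capacity, no idle cost
  hash_key     = "ride_id"

  attribute {
    name = "ride_id"
    type = "S" # string partition key
  }
}
```

`PAY_PER_REQUEST` mode matches the rest of the stack: like Lambda and API Gateway, the table bills only for actual traffic.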


Portfolio & Business Site Infrastructure

rusets-portfolio

Repo: https://github.com/rusets/rusets-portfolio

Infrastructure powering my personal website https://rusets.com, built with:

  • Private S3 + CloudFront (Origin Access Control)
  • Route 53 + ACM (DNS validation & HTTPS)
  • Terraform-managed infrastructure (remote state, modular design)
  • GitHub Actions with OIDC for fully keyless CI/CD

Purpose: implement a secure, production-grade static hosting stack with least-privilege access, automated deployments, and zero long-lived credentials.
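
The Origin Access Control piece can be sketched as follows; the OAC name is a placeholder, and the distribution and bucket policy are omitted:

```hcl
# OAC lets CloudFront sign requests to a private S3 origin,
# so the bucket never needs to be publicly readable.
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "portfolio-oac" # placeholder name
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
```

The distribution's S3 origin references this via `origin_access_control_id`, and the bucket policy then allows `s3:GetObject` only to the CloudFront service principal scoped to that distribution.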

rdservicepros-site

Repo: https://github.com/rusets/rdservicepros-site

Production static hosting stack for my small business RD Service Pros (Navarre, FL), built with:

  • Private S3 + CloudFront distribution
  • HTTPS via ACM
  • GitHub Actions for automated deployments and cache invalidation
  • Cost-efficient, low-maintenance infrastructure

Purpose: implement a secure and automated static hosting architecture suitable for a real small business, with minimal operational overhead and predictable cost.


Certifications & Background

Certifications

  • AWS Certified Solutions Architect – Associate
  • AWS Certified Developer – Associate
  • AWS Certified AI Practitioner
  • AWS Certified Cloud Practitioner
  • Linux Essentials

Background

  • Hands-on experience designing and operating production-grade cloud platforms:
    • Terraform remote state backends with locking and state isolation
    • Multi-account and multi-domain AWS architectures
    • IAM least-privilege design and continuous security hardening
    • Observability-driven performance tuning and capacity planning
  • Background in hardware and high-performance GPU operations prior to transitioning fully into cloud and platform engineering

