Sarthak Singh


$ whoami

┌──────────────────────────────────────────────────────────────────┐
│                                                                  │
│   name:        Sarthak Singh                                     │
│   role:        Cloud & DevOps Engineering Student                │
│   degree:      BTech Computer Engineering                        │
│   location:    India                                             │
│                                                                  │
│   philosophy: >                                                  │
│     "I don't want to know how to use the tools.                  │
│      I want to know what happens when they fail."                │
│                                                                  │
│   journey:     VM Infrastructure → Kubernetes → Production CI/CD │
│   goal:        Cloud / DevOps Architect. Systems-focused.        │
│   status:      Open to DevOps / Cloud Engineering Internship     │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

🗺️ My Engineering Journey

Everything below was built in sequence. Each project taught me something the previous one couldn't.

PHASE 1 — Learn infrastructure the hard way
    RoboShop on EC2 + ALB  →  15 Terraform modules, VMs, ASGs, security groups

PHASE 2 — Build reusable, modular IaC
    Terraform VPC Module  →  Reusable across projects, dual testing/prod mode

PHASE 3 — Move to container orchestration
    Kubernetes Lab  →  Every resource type, every concept, hands-on manifests

PHASE 4 — Production Kubernetes
    Retail Store on EKS  →  Helm umbrella charts, IRSA, HPA, StatefulSets, observability

PHASE 5 — Automate everything
    Jenkins Shared Library  →  One pipeline to rule all microservices, all languages

🏗️ Projects


🔴 RoboShop — VM-Based Multi-Environment AWS Infrastructure

| Repository | Description |
|---|---|
| 📁 RoboShop-Infra-Standard | Main infrastructure — Terraform modules, Jenkins CI/CD |
| 🐚 Roboshop-Dev-Prod-Infra-CICD-Deployment | Infrastructure CI/CD deployment |
| 📦 Terraform-AWS-VPC-Advanced | Reusable VPC module consumed by this project |
| 🛡️ RoboShop-Security-Group-Module | Reusable security group module |
| ⚙️ RoboShop-Ansible-Roles-tf | Ansible roles for configuration management |
| 🐚 RoboShop-Shell-Script-For-Alma-Linux | Bootstrap shell scripts |

The foundation. Before touching Kubernetes, I built the entire platform the hard way — EC2, ALBs, ASGs, Ansible — across DEV and PROD using 15 separate Terraform modules. This is where I learned why infrastructure needs to be code, not clicks.

Infrastructure (15 Terraform Modules)

  • Custom VPC module across 2 AZs — public, private, database subnets
  • 13 security groups — least-privilege rules, service-to-service on exact ports, VPN-only SSH
  • Remote S3 backend + DynamoDB state locking across all modules
  • SSM Parameter Store for cross-module data passing — zero hardcoded values
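The SSM cross-module pattern can be sketched in Terraform roughly like this — parameter path and resource names are illustrative, not taken from the actual modules:

```hcl
# Producer module (e.g. the VPC module): publish an output to SSM
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/roboshop/dev/vpc_id" # path is a placeholder
  type  = "String"
  value = aws_vpc.main.id
}

# Consumer module (e.g. security groups): read it back, no hardcoding
data "aws_ssm_parameter" "vpc_id" {
  name = "/roboshop/dev/vpc_id"
}

resource "aws_security_group" "catalogue" {
  name   = "catalogue"
  vpc_id = data.aws_ssm_parameter.vpc_id.value
}
```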

Compute & Deployment

  • AMI baking pipeline — EC2 → Ansible configure → stop → bake AMI → delete → Launch Template → ASG
  • Auto Scaling Groups with 50% CPU target tracking policy
  • Internal ALB with host-header routing to each microservice
  • External ALB + Route53 on custom domain stallions.space
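A 50% CPU target tracking policy in Terraform looks roughly like this (policy and ASG names are illustrative):

```hcl
# Target tracking: ASG adds/removes instances to hold average CPU near 50%
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-50-target" # illustrative name
  autoscaling_group_name = aws_autoscaling_group.catalogue.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```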

Databases & Services

  • MongoDB, Redis, MySQL, RabbitMQ — each isolated in database subnet
  • Separate security group per database — only the correct service can reach it
  • DNS records per service via Route53

Configuration Management

  • Ansible via ansible-pull model — instances pull their own config at boot
  • No push model, no manual SSH, no configuration drift
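A minimal user-data sketch of the pull model — repo URL, playbook name, and extra vars are placeholders, not the actual repo layout:

```bash
#!/bin/bash
# EC2 user-data: the instance configures itself at boot via ansible-pull,
# so no controller ever pushes to it and config stays in version control.
dnf install -y ansible-core git
ansible-pull \
  -U https://github.com/<user>/ansible-roles.git \
  -e component=catalogue \
  main.yml
```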

CI Pipeline (Catalogue Service)

  • Jenkins pipeline — version detection from package.json, npm build
  • SonarQube code quality scan, SAST security scan
  • Nexus artifact publish, downstream CD trigger with version parameter
  • Built for catalogue as proof of concept — full multi-service CI/CD in Retail Store below
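The version-detection step can be sketched in a Jenkinsfile like this (assumes the Pipeline Utility Steps plugin; stage layout is illustrative):

```groovy
// Jenkinsfile stage sketch — reads the app version straight from package.json
stage('Version') {
  steps {
    script {
      def pkg = readJSON file: 'package.json' // Pipeline Utility Steps plugin
      env.APP_VERSION = pkg.version
      echo "Building catalogue ${env.APP_VERSION}"
    }
  }
}
```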

Stack: Terraform · Ansible · Jenkins · AWS (EC2, ALB, ASG, Route53, SSM) · AlmaLinux


🟢 AWS Retail Store — EKS Production Deployment

| Repository | Description |
|---|---|
| 📁 retail-store-aws-deployment | Main EKS deployment — Helm umbrella charts, IRSA, observability |
| 🔄 retail-store-Jenkins-shared-library | Jenkins shared library — multi-language CI/CD for all services |

The flagship project. Everything learned building RoboShop on VMs, rebuilt properly on Kubernetes — with full observability, zero static credentials, and a CI/CD pipeline that handles every microservice in every language from a single shared library.

Helm Architecture

  • Umbrella chart with 10 subcharts — cart, catalog, checkout, orders, ui, mysql, postgresql, rabbitmq, redis, dynamodb
  • Environment-specific values files — dev uses local DynamoDB + static creds, prod uses AWS DynamoDB + IRSA
  • _helpers.tpl for health probe templates — defined once, used across all services
  • StorageClass defined inside the chart — self-contained, no cluster pre-setup needed
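A `_helpers.tpl` probe template along these lines — the template name, endpoint path, port, and thresholds are assumptions, not copied from the chart:

```yaml
{{/* _helpers.tpl sketch: probes defined once, included by every subchart */}}
{{- define "retail.probes" -}}
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /health
    port: 8080
readinessProbe:
  httpGet:
    path: /health
    port: 8080
{{- end }}
```

Each subchart's Deployment then pulls it in with something like `{{ include "retail.probes" . | nindent 10 }}` inside the container spec, so a probe change lands in every service at once.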

Security — IRSA Fully Implemented

  • OIDC provider enabled on EKS cluster
  • Trust policy with exact system:serviceaccount condition — scoped to one service account
  • IAM policy scoped to specific DynamoDB table ARN — not account-wide
  • ServiceAccount annotated with eks.amazonaws.com/role-arn
  • Zero static AWS credentials anywhere in the cluster
  • Verified by checking AWS_ROLE_ARN + AWS_WEB_IDENTITY_TOKEN_FILE in pod env
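The trust policy shape, with placeholders for the account and OIDC provider and an illustrative namespace/service-account pair:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "<OIDC_PROVIDER>:sub": "system:serviceaccount:default:carts"
      }
    }
  }]
}
```

The `sub` condition is what scopes the role to exactly one ServiceAccount — without it, any pod in the cluster could assume the role.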

Kubernetes Architecture

  • StatefulSets with EBS dynamic provisioning for MySQL and PostgreSQL
  • volumeClaimTemplates — each replica gets its own EBS volume, not a shared PVC
  • HPA on all 5 application services at 70% CPU target
  • Health probes (startup + liveness + readiness) — reduced deploy time 12 min → 1.5 min
  • EBS CSI driver + gp3 StorageClass with WaitForFirstConsumer
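A StatefulSet sketch showing `volumeClaimTemplates` — image, size, labels, and StorageClass name are illustrative:

```yaml
# Each replica gets its own dynamically provisioned EBS volume;
# a shared PVC would corrupt a database the moment replicas > 1.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels: { app: mysql }
  template:
    metadata:
      labels: { app: mysql }
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3   # WaitForFirstConsumer StorageClass
        resources:
          requests:
            storage: 10Gi
```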

CI/CD — Jenkins Shared Library

  • detectVersion() — reads pom.xml for Java, main.go for Go, package.json for Node
  • dockerBuildPush() — builds image, pushes to DockerHub with version tag
  • deployK8s() — helm upgrade with environment-specific values file
  • One shared library handles all 5 services across 3 languages
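A sketch of what a `detectVersion()` step could look like — the actual library internals may differ; `readMavenPom`/`readJSON` assume the Pipeline Utility Steps plugin, and the Go branch assumes the version lives in a source constant:

```groovy
// vars/detectVersion.groovy sketch — one function, three ecosystems
def call() {
  if (fileExists('pom.xml')) {
    return readMavenPom().version                    // Java / Maven
  } else if (fileExists('package.json')) {
    return readJSON(file: 'package.json').version    // Node.js
  } else if (fileExists('main.go')) {
    // Go: grep the version constant out of the source
    return sh(returnStdout: true,
              script: /grep -oP 'version\s*=\s*"\K[^"]+' main.go/).trim()
  }
  error 'No recognised version source found'
}
```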

Observability

  • Prometheus kube-prometheus-stack with ServiceMonitor per service
  • Grafana dashboards — 5xx error rate, CPU, JVM memory, requests/second
  • Health endpoints: /actuator/health/liveness, /actuator/health/readiness, /health, /metrics
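A ServiceMonitor along these lines — labels, port name, and interval are assumptions:

```yaml
# Tells the kube-prometheus-stack operator to scrape this service's /metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: catalog
  labels:
    release: kube-prometheus-stack   # must match the operator's selector
spec:
  selector:
    matchLabels:
      app: catalog
  endpoints:
    - port: http
      path: /metrics
      interval: 30s
```

A 5xx-rate panel would then typically sit on a query like `sum(rate(http_server_requests_seconds_count{status=~"5.."}[5m]))` — the exact metric name depends on each service's instrumentation.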

Stack: AWS EKS · Helm · Jenkins Shared Library · IRSA · Prometheus · Grafana · EBS CSI · Docker · Terraform


🛠️ Full Tech Stack

| Layer | Technologies |
|---|---|
| ☁️ Cloud | AWS — EKS · VPC · IAM · ALB · ASG · EC2 · EBS · EFS · DynamoDB · Route53 · SSM · S3 |
| 🐳 Containers | Kubernetes · Docker · Helm — Umbrella Charts · IRSA · StatefulSets · HPA |
| 🏗️ IaC | Terraform — Reusable Modules · Remote State · SSM Integration · Dual-mode VPC |
| ⚙️ Config Mgmt | Ansible — ansible-pull · Roles · Zero push model |
| 🔄 CI/CD | Jenkins — Shared Libraries · Nexus · SonarQube · Multi-language version detection |
| 📊 Observability | Prometheus · Grafana — ServiceMonitors · Custom Dashboards · PromQL |
| 🐧 OS & Scripting | Linux · Bash — AlmaLinux · Ubuntu |
| 💻 Languages | Java · Go · Node.js · Python (reading/debugging) |
| 🔁 VCS | Git · GitHub |


📌 Honest Status

PRODUCTION-READY ───────────────────────────────────────────────────────

  ✅  Multi-env VM infrastructure — 15 Terraform modules, full AWS stack
  ✅  Reusable Terraform VPC module — dual mode, region-agnostic, validated
  ✅  EKS production deployment — Helm umbrella, 10 subcharts, multi-env values
  ✅  IRSA — full OIDC chain, scoped IAM, zero static credentials
  ✅  Jenkins shared library — multi-language, multi-service, single codebase
  ✅  StatefulSets + EBS dynamic provisioning — MySQL and PostgreSQL
  ✅  HPA + health probes — 12 min → 1.5 min deployment time
  ✅  Prometheus + Grafana — ServiceMonitors, custom dashboards

ACTIVELY LEARNING ──────────────────────────────────────────────────────

  🔄  Linux internals — cgroups, namespaces, process model, OOM killer
  🔄  Networking — TCP, DNS resolution chain, k8s packet path through CNI
  🔄  PromQL + AlertManager — from dashboards to actual SLOs and alert rules

NEXT BUILDS ────────────────────────────────────────────────────────────

  ⏳  External Secrets Operator — replace plaintext passwords with ESO + IRSA
  ⏳  NetworkPolicies — pod-level security mirroring existing SG rules
  ⏳  ArgoCD — pull-based deployment replacing Jenkins CD
  ⏳  Terraform CI — tflint + checkov + GitHub Actions on every PR
  ⏳  Trivy image scanning in Jenkins pipeline

💡 Why My Work Is Different

Most students who "know Kubernetes" have run kubectl apply on a tutorial manifest. Here is what I have actually done differently.

On IRSA: I did not copy-paste a blog post. I enabled the OIDC provider, wrote the trust policy with the service account condition, scoped the IAM policy to a specific table ARN, annotated the ServiceAccount, and verified it by exec-ing into the pod and checking env | grep AWS. I know it works because I know why it works.

On the VPC module: I wrote a module that other modules consume. It has input validation, dual deployment modes, and is region-agnostic. When I call it from RoboShop and call it again from a new project, I do not copy-paste Terraform — I reference the same tested module.

On the Jenkins shared library: I did not write a Jenkinsfile per service. I wrote three reusable Groovy functions that handle version detection for Maven, Go, and Node — then one pipeline definition that every service calls. When the deployment process changes, I change it once.

On health probes: I did not add them because a tutorial said to. I added them, measured the deploy time, watched it drop from 12 minutes to 1.5 minutes, and understood why. The startup probe is what made the difference — Kubernetes was killing pods before they finished connecting to databases.
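The fix described above boils down to a probe block like this (endpoint, port, and thresholds are illustrative): 30 failures at 5-second intervals gives the app up to 150 seconds to finish its database connections before liveness checks are allowed to kill it.

```yaml
# Without a startup probe, the liveness probe fires immediately and
# restarts pods that are still initialising — the deploy-time killer.
startupProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  failureThreshold: 30
  periodSeconds: 5
```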



BTech Student · Building production infrastructure · Seeking DevOps Internship


LinkedIn · Email · Phone
