v3.0 represents a fundamental shift — from a static portfolio to an Autonomous Technical Twin. Built on an enterprise-grade speed stack, it features a proactive AI agent that conducts real-time technical screenings, explains architectural decisions, and surfaces direct evidence from my GitHub, LinkedIn, and live production environments.
- Proactive AI Technical Twin: A bi-directional agent powered by Groq (Llama 3.3) and WebSockets. It monitors user intent and proactively offers contextual deep-dives — e.g. "I see you're looking at GradeHub; want to know how I handled multi-tenant RBAC?"
- Enterprise RAG Pipeline: A custom Retrieval-Augmented Generation engine backed by Neon (PostgreSQL + pgvector), performing semantic search across my entire professional history so answers stay grounded in real source documents rather than hallucinated.
- Automated Knowledge Ingestion: A modular ETL pipeline that auto-syncs from:
- GitHub — real-time README scraping and technical metadata extraction
- LinkedIn — standardised career history via PDF-parsing
- Live Web — recursive crawling of production sites for brand consistency
- Local Inference Engine: High-performance vector embeddings generated on-server with Xenova Transformers (all-MiniLM-L6-v2) — zero API cost and no network latency for vectorisation.
- The Shield (Rate Limiting): Context-aware throttling via NestJS Throttler, protecting both the HTTP and WebSocket layers from bot spam.
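The retrieval step in the RAG pipeline above boils down to ranking stored embedding vectors by similarity to a query embedding. A minimal, self-contained sketch of that ranking in TypeScript (function names are illustrative; in production pgvector performs this comparison in-database with its distance operators):

```typescript
// Cosine similarity: 1.0 means identical direction, 0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored document embeddings against a query embedding and
// return the ids of the k most similar documents.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k: number,
): string[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.id);
}
```

The real pipeline swaps the in-memory array for a pgvector index, so the nearest-neighbour search scales beyond what a linear scan can handle.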
**Frontend**

| Layer | Technology |
|---|---|
| Framework | Next.js 15+ (App Router) |
| Real-time UI | Socket.io-client & Framer Motion |
| Rich Media | React Markdown with GFM support |
| Design | Tailwind CSS & shadcn/ui |
| Forms | React Hook Form & Zod |
| Email | Resend & React Email |
| Icons | Lucide React |
| Notifications | Sonner |
**Backend**

| Layer | Technology |
|---|---|
| Framework | NestJS (Modular Architecture) |
| AI Inference | Groq Cloud LPU — Llama 3.3 70B |
| Vector DB | Neon with pgvector |
| ORM | Drizzle ORM (type-safe migrations) |
| Ingestion | Playwright (headless scraping) & pdf-parse |
**Infrastructure & DevOps**

| Layer | Technology |
|---|---|
| Containerisation | Docker (multi-stage builds) |
| Hosting | AWS EC2 (t3.micro) + Nginx reverse proxy |
| CI/CD | GitHub Actions — automated deployment & model pre-caching |
| Frontend Deploy | Vercel |
```
steve-portfolio-v3/
├── client/              # Next.js Frontend → Vercel
│   ├── src/hooks/       # Custom AI & Socket hooks
│   └── src/services/    # Singleton Socket management
├── server/              # NestJS Backend → AWS EC2
│   ├── src/modules/     # Modular AI, Vector, and Chat domains
│   ├── src/scripts/     # Knowledge seeding and ETL scripts
│   └── Dockerfile       # High-performance production image
└── infra/               # Nginx & SSL configurations
```
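The `src/services/` singleton socket management noted above ensures the whole client shares one WebSocket connection instead of opening a new one per component. A minimal sketch of the pattern (all names here are illustrative, and a placeholder transport stands in for the real `socket.io-client` Socket):

```typescript
// Placeholder for the real socket.io-client Socket; illustrative only.
class FakeTransport {
  connected = false;
  connect(): this {
    this.connected = true;
    return this;
  }
  disconnect(): void {
    this.connected = false;
  }
}

class SocketService {
  private static instance: SocketService | null = null;
  private transport: FakeTransport;

  // Private constructor: the only way in is through get().
  private constructor() {
    this.transport = new FakeTransport().connect();
  }

  // Every caller receives the same instance, hence the same connection.
  static get(): SocketService {
    if (!SocketService.instance) {
      SocketService.instance = new SocketService();
    }
    return SocketService.instance;
  }

  isConnected(): boolean {
    return this.transport.connected;
  }
}
```

Because `SocketService.get()` always returns the same object, React hooks in `src/hooks/` can subscribe from anywhere without multiplying connections.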
This is a monorepo managed with pnpm.
1. Clone the repository:

```bash
git clone https://github.com/Arnoldsteve/Portfolio.git
cd steve-portfolio-v3
```

2. Server setup:

```bash
cd server
pnpm install
# Add GROQ_API_KEY and DATABASE_URL to .env
pnpm db:push   # Run Drizzle migrations
pnpm seed      # Initialise AI memory
pnpm start:dev
```

3. Client setup:

```bash
cd client
pnpm install
# Add RESEND_API_KEY to .env.local
pnpm dev
```

Open http://localhost:3000 to see the result.
- Frontend: The `client/` directory deploys automatically to Vercel on every push to `main`.
- Backend: The `server/` directory is containerised via Docker and deployed to AWS EC2 through a GitHub Actions CI/CD pipeline.
This project is licensed under the MIT License. See the LICENSE file for details.
This project follows Conventional Commits and SOLID design principles. For an interactive technical deep-dive, talk to the AI Twin on the live site.
Steve Arnold Otieno — Solutions Architect & Full-Stack Engineer
- Email: stevearnold9e@gmail.com
- LinkedIn: linkedin.com/in/steve-arnold-otieno
