An intelligent resume analysis platform that provides personalized, ATS-aligned feedback to help you land your dream job. Get detailed insights on your resume's compatibility, content quality, structure, and actionable improvements tailored to specific job postings.
- AI-Powered Resume Analysis - Upload your PDF resume and receive comprehensive feedback across multiple categories (ATS compatibility, tone & style, content, structure, skills)
- Job-Specific Tailoring - Import job postings from URLs or paste descriptions to get targeted feedback for specific roles
- Visual Resume Previews - View preview images of your uploaded resumes alongside analysis results
- Detailed Feedback Categories - Get scored feedback with actionable tips for:
  - ATS compatibility
  - Tone and style
  - Content quality
  - Document structure
  - Skills alignment
  - Line-by-line improvements with suggested rewrites
- Resume Management - Track all your analyses in a dashboard, delete individual resumes, or wipe all data
- Secure Authentication - Google OAuth integration with Better Auth
- Rate Limiting - Built-in protection against abuse (2 analyses/min, 5 job imports/min)
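The scored categories above can be pictured as one typed analysis result. The sketch below is illustrative only - the project's actual schemas live in `constants/index.ts` and `types/index.d.ts`, and the field names, the 0-100 scale, and the mean-based aggregation here are assumptions:

```typescript
// Hypothetical shape of one analysis result; field names and the 0-100
// scale are assumptions, not the project's actual schema.
interface CategoryFeedback {
  score: number; // assumed 0-100
  tips: string[]; // actionable suggestions
}

interface ResumeAnalysis {
  overallScore: number;
  ats: CategoryFeedback;
  toneAndStyle: CategoryFeedback;
  content: CategoryFeedback;
  structure: CategoryFeedback;
  skills: CategoryFeedback;
}

// One possible aggregation: the overall score as the rounded mean of
// the category scores (an assumption for illustration).
function overallScore(categories: CategoryFeedback[]): number {
  const total = categories.reduce((sum, c) => sum + c.score, 0);
  return Math.round(total / categories.length);
}
```

A result typed this way lets the dashboard and detail pages render every category with the same component.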
- Framework: Next.js 16 App Router with TypeScript
- Runtime: Bun (for development and builds)
- Authentication: Better Auth with Google OAuth
- Database: Neon PostgreSQL with Prisma ORM
- AI: Cerebras AI for resume analysis and job data extraction
- PDF Processing: External PDF service for markdown conversion and preview generation
- Job Import: Defuddle for web content extraction
- Styling: Tailwind CSS v4 with custom animations
- File Upload: React Dropzone
- UI Components: Lucide React icons, Sonner toasts
- Bun v1.0 or higher
- A Neon PostgreSQL database (sign up at neon.tech)
- Google OAuth credentials (from Google Cloud Console)
- Cerebras API key (from Cerebras Cloud)
- PDF service endpoint (for PDF to markdown conversion and preview generation)
- A modern web browser
- Clone the repository:

```bash
git clone https://github.com/prathamdby/ai-resume-analyzer.git
cd ai-resume-analyzer
```

- Install dependencies:

```bash
bun install
```

- Set up environment variables:

Create a `.env` file in the root directory with the following variables:
```env
# Database Configuration
# PostgreSQL pooled connection for queries (from Neon dashboard)
DATABASE_URL="postgresql://user:password@host/database?sslmode=require&pgbouncer=true"

# Direct database connection for migrations (from Neon dashboard - without pgbouncer)
DIRECT_DATABASE_URL="postgresql://user:password@host/database?sslmode=require"

# Better Auth Configuration
BETTER_AUTH_SECRET="<generate-random-32-char-string>"
BETTER_AUTH_URL="http://localhost:3000"
NEXT_PUBLIC_BETTER_AUTH_URL="http://localhost:3000"

# Google OAuth Credentials
GOOGLE_CLIENT_ID="<your-google-oauth-client-id>"
GOOGLE_CLIENT_SECRET="<your-google-oauth-client-secret>"

# Cerebras AI Configuration
CEREBRAS_API_KEY="<your-cerebras-api-key>"

# PDF Service Configuration
PDF_SERVICE_URL="http://localhost:8000" # URL of the Python FastAPI service for PDF to markdown conversion

# Rate Limiting (optional)
DISABLE_RATE_LIMITING="false" # Set to "true" in development if needed
```

Important Notes:
- Get your Neon connection strings from the Neon dashboard. Use the pooled connection URL for `DATABASE_URL` and the direct connection URL for `DIRECT_DATABASE_URL`.
- Generate a random 32-character string for `BETTER_AUTH_SECRET` (e.g., using `openssl rand -base64 32`).
- For Google OAuth setup:
  - Go to Google Cloud Console
  - Create a new project or select an existing one
  - Configure the OAuth consent screen
  - Create OAuth 2.0 credentials
  - Add the authorized redirect URI `http://localhost:3000/api/auth/callback/google` (for development)
  - Copy the Client ID and Client Secret to your `.env` file
- The PDF service should expose:
  - `POST /convert` - Accepts a PDF file, returns `{ markdown: string, preview_image?: string }`
  - `GET /health` - Health check endpoint
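Given that contract, a client call to the PDF service might look like the sketch below. This is illustrative, not the project's actual code in `lib/` - the endpoint paths and response fields come from this README, while the function names and the runtime guard are assumptions:

```typescript
// Response shape per the /convert contract described above.
interface ConvertResponse {
  markdown: string;
  preview_image?: string;
}

// Runtime guard so a misbehaving service fails loudly instead of letting
// an undefined `markdown` reach the AI analysis step.
function isConvertResponse(data: unknown): data is ConvertResponse {
  return (
    typeof data === "object" &&
    data !== null &&
    typeof (data as ConvertResponse).markdown === "string"
  );
}

// Hypothetical client helper: POST the PDF as multipart form data.
async function convertPdf(
  file: Blob,
  baseUrl = process.env.PDF_SERVICE_URL ?? "http://localhost:8000",
): Promise<ConvertResponse> {
  const form = new FormData();
  form.append("file", file, "resume.pdf");
  const res = await fetch(`${baseUrl}/convert`, { method: "POST", body: form });
  if (!res.ok) throw new Error(`PDF service error: ${res.status}`);
  const data: unknown = await res.json();
  if (!isConvertResponse(data)) throw new Error("Unexpected /convert payload");
  return data;
}
```

Checking `GET /health` before uploading is a cheap way to surface a misconfigured `PDF_SERVICE_URL` early.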
- Set up the database:

```bash
# Generate Prisma Client
bunx prisma generate

# Run migrations to create the auth and rate-limiting tables
bunx prisma migrate dev --name init_better_auth
```

Note:
- The job import feature uses rate limiting (5 requests per minute per user) to prevent abuse. Rate limits are stored in the database and persist across serverless invocations.
- Resume analysis uses rate limiting (2 requests per minute per user).
- The PDF service processes PDFs synchronously (one at a time). Under concurrent load, requests will queue and may time out.
- Maximum file size is 20 MB. PDFs are converted to markdown (max 15K characters) before AI analysis.
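The per-user limits described in these notes amount to a fixed-window counter. The sketch below shows the idea in memory only - the real implementation in `lib/rate-limit.ts` persists windows in Postgres so they survive serverless invocations, and the function names here are assumptions:

```typescript
// Minimal in-memory sketch of a fixed-window rate limiter. Illustrative
// only; the production version stores windows in the database.
type Window = { start: number; count: number };

function makeLimiter(limit: number, windowMs = 60_000) {
  const windows = new Map<string, Window>();
  return function allow(userId: string, now = Date.now()): boolean {
    const w = windows.get(userId);
    if (!w || now - w.start >= windowMs) {
      // First request, or the previous window expired: start a new one.
      windows.set(userId, { start: now, count: 1 });
      return true;
    }
    if (w.count < limit) {
      w.count++;
      return true;
    }
    return false; // over the limit for this window
  };
}

const allowAnalyze = makeLimiter(2); // 2 analyses/min per user
const allowImport = makeLimiter(5); // 5 job imports/min per user
```

An in-memory map like this resets on every cold start, which is exactly why the project keeps its counters in the database instead.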
- Start the development server:

```bash
bun run dev
```

- Open your browser and navigate to `http://localhost:3000`. You'll be redirected to `/auth` to sign in with Google.
Run TypeScript type checking:

```bash
bun run typecheck
```

- Sign In (`/auth`) - Authenticate with Google OAuth (required to access other pages)
- Upload & Analyze (`/upload`) - Upload your PDF resume and provide job details:
  - Enter job title and description (required)
  - Optionally import from a job posting URL or paste company name
  - Upload PDF resume (max 20 MB)
  - Receive comprehensive AI feedback within seconds
- Dashboard (`/`) - View all your resume analyses:
  - See preview images, job titles, and companies
  - Click any resume card to view detailed feedback
  - Delete individual resumes or wipe all data
- Resume Detail (`/resume/:id`) - View detailed analysis:
  - Overall score and category breakdowns
  - ATS compatibility tips
  - Tone, content, structure, and skills feedback
  - Line-by-line improvement suggestions with rewrites
  - Visual resume preview
- `/auth` - Google OAuth sign-in page (public)
- `/` (home) - Dashboard with all resume analyses - Protected
- `/upload` - Upload form for new resume analysis - Protected
- `/resume/:id` - Detailed resume analysis view - Protected

All routes except `/auth` are protected and require authentication. Unauthenticated users are automatically redirected to `/auth`.
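The redirect rule can be reduced to a small pure helper. This is a sketch of the rule as stated, not the project's actual middleware - the route list comes from this README, while the function name and prefix-matching logic are assumptions:

```typescript
// Routes reachable without a session, per the rule described above.
const PUBLIC_ROUTES = ["/auth"];

// Returns the path to redirect to, or null if the request may proceed.
// Illustrative helper; the real guard lives in the app's auth layer.
function redirectTarget(pathname: string, authenticated: boolean): string | null {
  const isPublic = PUBLIC_ROUTES.some(
    (route) => pathname === route || pathname.startsWith(`${route}/`),
  );
  if (!authenticated && !isPublic) return "/auth";
  return null;
}
```

Keeping the decision in one pure function makes the public/protected split easy to unit-test independently of the framework.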
```bash
bun run build
bun run start
```

This serves the production build via Next.js on port 3000.
```
├── app/
│   ├── api/
│   │   ├── analyze/          # Resume analysis endpoint
│   │   ├── auth/             # Better Auth route handlers
│   │   ├── import-job/       # Job posting import endpoint
│   │   ├── resumes/          # Resume CRUD endpoints
│   │   └── user/wipe/        # Bulk data deletion endpoint
│   ├── components/           # Reusable UI components
│   │   ├── Accordion.tsx
│   │   ├── AnalysisSection.tsx
│   │   ├── ATS.tsx
│   │   ├── FileUploader.tsx
│   │   ├── ResumeCard.tsx
│   │   └── ...
│   ├── resume/[id]/          # Resume detail page (dynamic route)
│   ├── upload/               # Upload page
│   ├── auth/                 # Auth page
│   ├── layout.tsx            # Root layout component
│   ├── page.tsx              # Home/dashboard page
│   └── globals.css           # Global styles
├── lib/
│   ├── ai.ts                 # Cerebras AI client configuration
│   ├── api.ts                # API client utilities
│   ├── auth.ts               # Better Auth server configuration
│   ├── auth-client.ts        # Better Auth client utilities
│   ├── auth-server.ts        # Server-side auth helpers
│   ├── prisma.ts             # Prisma client singleton
│   ├── rate-limit.ts         # Rate limiting implementation
│   └── schemas.ts            # Zod validation schemas
├── constants/
│   └── index.ts              # AI prompts and response format schemas
├── types/
│   └── index.d.ts            # Application type definitions
├── prisma/
│   ├── schema.prisma         # Prisma schema with Better Auth models
│   └── migrations/           # Database migration history
└── public/                   # Static assets (icons, images)
```
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch: `git checkout -b feature/new-feature`
- Install dependencies: `bun install`
- Make your changes following the repository guidelines (see `AGENTS.md`)
- Run type checking: `bun run typecheck`
- Test manually: `bun run dev`
- Commit your changes with a clear message: `git commit -m 'Add new feature'`
- Push to the branch: `git push origin feature/new-feature`
- Submit a pull request with:
  - A clear problem statement
  - Screenshots/recordings for UI changes
  - Notes on testing performed
- Use Bun for all package management and scripts
- Follow TypeScript strict mode conventions
- Use Tailwind CSS utility classes (2-space indentation)
- Keep pages modular: page logic in `app/`, shared UI in `app/components/`
- Maintain type safety: all PRs must pass `bun run typecheck`
- See `AGENTS.md` for detailed architecture patterns and contributor onboarding
For development:
```bash
bunx prisma migrate dev --name <migration-name>
```

For production:

```bash
bunx prisma migrate deploy
```

Important: Always use migrations (`migrate dev` / `migrate deploy`) instead of `prisma db push` for production safety.
This project is available under the MIT License.