Minimal Slack AI chatbot built with FastAPI and LLMs, deployable on Google Cloud Run with automated CI/CD via GitHub Actions.
This project provides a clean, production-ready foundation for building Slack chatbots powered by large language models.
It demonstrates:
- Real-time Slack event handling
- LLM-based response generation
- Containerized deployment on Cloud Run
- Automated deployment using GitHub Actions
Slack -> FastAPI -> LLM -> Slack response
```
.
├── app.py
├── Dockerfile
├── requirements.txt
└── src/
    ├── llm.py
    └── slack_handler.py
```
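`src/llm.py` might look something like the sketch below, assuming the OpenAI Chat Completions API. The function names, system prompt, and model choice here are illustrative, not the project's actual code.

```python
import os

SYSTEM_PROMPT = "You are a helpful Slack assistant. Keep replies short."

def build_messages(user_text: str) -> list[dict]:
    """Assemble the chat payload sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

def generate_reply(user_text: str) -> str:
    """Call the LLM. The client is created lazily so importing this
    module does not require OPENAI_API_KEY to be set."""
    from openai import OpenAI  # requires the `openai` package

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=build_messages(user_text),
    )
    return response.choices[0].message.content
```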
```bash
git clone https://github.com/your-username/slack-llm-bot.git
cd slack-llm-bot
```
Set the required environment variables:

```bash
SLACK_BOT_TOKEN=your-token
SLACK_SIGNING_SECRET=your-secret
OPENAI_API_KEY=your-key
```
Run the server locally:

```bash
uvicorn app:api --reload --port 3000
```
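Slack signs every request with `SLACK_SIGNING_SECRET`, and the signature should be verified before an event is trusted. A self-contained sketch of Slack's documented `v0` signing scheme, using only the standard library (the function name is illustrative):

```python
import hashlib
import hmac

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: bytes, signature: str) -> bool:
    """Recompute Slack's v0 signature and compare in constant time.

    Slack sends the timestamp in the X-Slack-Request-Timestamp header
    and the signature in X-Slack-Signature.
    """
    basestring = b"v0:" + timestamp.encode() + b":" + body
    expected = "v0=" + hmac.new(
        signing_secret.encode(), basestring, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, signature)
```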
Build and deploy manually:

```bash
gcloud builds submit --tag gcr.io/YOUR_PROJECT/slack-llm-bot

gcloud run deploy slack-llm-bot \
  --image gcr.io/YOUR_PROJECT/slack-llm-bot \
  --region europe-west1 \
  --platform managed \
  --allow-unauthenticated \
  --set-env-vars SLACK_BOT_TOKEN=xxx,SLACK_SIGNING_SECRET=xxx,OPENAI_API_KEY=xxx
```
This project uses GitHub Actions to deploy to Google Cloud Run automatically on every push to `main`. The deployment pipeline:
- Trigger: push to `main`
- Build Docker image
- Push to Google Container Registry
- Deploy to Cloud Run
- Inject environment variables securely via GitHub Secrets
Required GitHub secrets:
- `GCP_PROJECT_ID`
- `GCP_REGION`
- `GCP_SA_KEY`
- `SERVICE_NAME`
- `SLACK_BOT_TOKEN`
- `SLACK_SIGNING_SECRET`
- `OPENAI_API_KEY`
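Using those secrets, the pipeline might be expressed as a workflow along these lines. This is a hedged sketch; the actual workflow file, action versions, and step names in this repo may differ.

```yaml
name: Deploy to Cloud Run

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}

      - uses: google-github-actions/setup-gcloud@v2

      - name: Build and push image
        run: |
          gcloud builds submit \
            --tag gcr.io/${{ secrets.GCP_PROJECT_ID }}/${{ secrets.SERVICE_NAME }}

      - name: Deploy
        run: |
          gcloud run deploy ${{ secrets.SERVICE_NAME }} \
            --image gcr.io/${{ secrets.GCP_PROJECT_ID }}/${{ secrets.SERVICE_NAME }} \
            --region ${{ secrets.GCP_REGION }} \
            --platform managed \
            --allow-unauthenticated \
            --set-env-vars SLACK_BOT_TOKEN=${{ secrets.SLACK_BOT_TOKEN }},SLACK_SIGNING_SECRET=${{ secrets.SLACK_SIGNING_SECRET }},OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
```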
Invite the bot to a channel:

```
/invite @Slack LLM Bot
```

Send a message:

```
@Slack LLM Bot hello
```
- Slack retries event deliveries if the endpoint does not respond within ~3 seconds
- Duplicate event handling should be implemented for production
- Cloud Run cold starts may introduce latency
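The retry and duplicate-delivery caveats above are usually handled by acknowledging within 3 seconds and remembering recently seen `event_id`s. A minimal in-memory sketch (names are illustrative; a real deployment on Cloud Run, where instances come and go, would want a shared store such as Redis or Firestore):

```python
import time

class EventDeduplicator:
    """Remember recently seen Slack event_ids so retried deliveries are ignored."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._seen: dict[str, float] = {}  # event_id -> first-seen time

    def is_duplicate(self, event_id: str) -> bool:
        now = time.monotonic()
        # Drop entries older than the TTL so the dict stays bounded.
        self._seen = {e: t for e, t in self._seen.items()
                      if now - t < self.ttl}
        if event_id in self._seen:
            return True
        self._seen[event_id] = now
        return False
```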
MIT