A production-ready MLOps pipeline demonstrating best practices for machine learning operations, from training to deployment and monitoring.
This project implements a complete MLOps workflow for a sentiment analysis model, showcasing:
- Automated CI/CD with GitHub Actions
- Model Registry with MLflow
- Containerized Training with Docker
- REST API for inference (FastAPI)
- Web Interface for interactive demos
- Model Monitoring and drift detection
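Of the features above, drift detection is the least self-explanatory. One common approach is the Population Stability Index (PSI), which compares the distribution of model scores between training data and live traffic. The sketch below is a minimal pure-Python illustration of the idea, not this project's actual implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of scores in [0, 1].

    PSI < 0.1 is usually read as "no drift", 0.1-0.25 as "moderate",
    and > 0.25 as "significant" -- these thresholds are conventions.
    """
    edges = [i / bins for i in range(bins + 1)]
    eps = 1e-6  # floor bucket fractions to avoid log(0)

    def frac(sample, lo, hi):
        count = sum(1 for x in sample if lo <= x < hi or (hi == 1.0 and x == 1.0))
        return max(count / len(sample), eps)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total
```

Identical distributions score near zero; a shifted live distribution pushes PSI up, which a monitoring job can alert on.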
```
mlops/
├── src/
│   ├── data/            # Data processing and loading utilities
│   ├── models/          # Model definitions and architectures
│   ├── training/        # Training scripts and pipelines
│   └── inference/       # API and inference code
├── tests/               # Unit and integration tests
├── docs/                # Documentation and architecture notes
├── scripts/             # Utility scripts (health checks, etc.)
├── .github/             # GitHub Actions workflows
└── requirements.txt     # Python dependencies
```
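The dependency list itself is not reproduced here; given the tooling named above, `requirements.txt` plausibly pins packages along these lines (the exact package set and versions are assumptions -- consult the file itself):

```text
mlflow
fastapi
uvicorn
scikit-learn
pandas
```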
Before running anything, verify your setup is correct:
Using Docker:

```bash
# Build the image
docker build -t mlops:latest .

# Run health check
docker run --rm mlops:latest python scripts/health_check.py
```

Or use the test script:

```bash
chmod +x scripts/test_docker.sh
./scripts/test_docker.sh
```

Locally (if you have Python installed):

```bash
python scripts/health_check.py
```

Since this project is containerized, you can run everything without installing Python locally:
1. Build the Docker image:

   ```bash
   docker build -t mlops:latest .
   ```

2. Run training:

   ```bash
   docker run --rm mlops:latest python src/training/train.py
   ```

3. Run the inference API:

   ```bash
   docker run -p 8000:8000 mlops:latest python src/inference/app.py
   ```
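The contents of `scripts/health_check.py` are not shown in this README. A minimal version might simply verify that required dependencies import cleanly and exit non-zero on failure; the module list below uses stdlib stand-ins, since the real list would name the project's actual dependencies:

```python
import importlib
import sys

# Stand-in module names; a real check would list mlflow, fastapi, etc.
REQUIRED_MODULES = ["json", "logging", "sqlite3"]

def check_imports(modules):
    """Return the subset of modules that failed to import."""
    failures = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures

if __name__ == "__main__":
    failed = check_imports(REQUIRED_MODULES)
    if failed:
        print(f"Health check FAILED, missing: {', '.join(failed)}")
        sys.exit(1)
    print("Health check OK")
```

The non-zero exit code is what lets Docker (or CI) treat a failed check as a failed step.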
If you have Python 3.10+ installed locally:

1. Create a virtual environment:

   ```bash
   python3.10 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run training:

   ```bash
   python src/training/train.py
   ```

4. Start the inference API:

   ```bash
   python src/inference/app.py
   ```
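Once the API is up, you can query it from any HTTP client. The `/predict` route and JSON shape below are assumptions -- check `src/inference/app.py` for the real contract:

```python
import json
from urllib import request

API_URL = "http://localhost:8000/predict"  # assumed route

def make_payload(text: str) -> bytes:
    """Encode the request body as JSON, as a typical FastAPI endpoint expects."""
    return json.dumps({"text": text}).encode("utf-8")

def predict(text: str, url: str = API_URL) -> dict:
    """POST a sentence to the inference API and return the parsed response."""
    req = request.Request(
        url,
        data=make_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(predict("This pipeline is great!"))
```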
See the /docs directory for detailed documentation.
License: MIT