A deep-learning web application for deepfake video detection, built with Django, TensorFlow/Keras (Xception), and OpenCV. The system was developed as a Final Year Project at Bahria University.
Users upload an MP4 clip; the backend checks for visible faces, extracts face crops from sampled frames, runs each crop through a fine-tuned Xception classifier, and shows an aggregate verdict with per-frame confidence and charts on the results page.
Landing / upload experience
Detection results (charts and verdict)
| Area | Description |
|---|---|
| Web app | Django project deepfake_detector, app app, templates under templates/ |
| Inference | app/views.py — face detection (Haar cascade), frame capture, duplicate filtering (ImageHash), Xception preprocessing and prediction |
| Model weights | Downloaded at runtime from Hugging Face Hub (not stored in git) — see Model |
| Database | SQLite by default (db.sqlite3); models for newsletter emails and feedback |
| Deployment | Dockerfile + Gunicorn; WhiteNoise for static files; suitable for Railway or similar platforms |
Standalone training notebooks/scripts are not included in this repo; the shipped code focuses on inference and the web UI. Training was done separately; the resulting .h5 weights are published on Hugging Face for use by this app.
- Xception-based detection — Face crops resized to 224×224, Xception `preprocess_input`, binary output interpreted so higher score → fake (see `evaluate_frames` in `app/views.py`).
- Face-aware pipeline — Rejects videos with no detectable faces (quick scan across sampled frames).
- Frame extraction — Periodic sampling (`frame_skip = 20`), minimum face size and brightness filters, perceptual hashing to drop near-duplicates.
- Results UI — Overall label, average confidence, real/fake counts, ApexCharts series for exploration; optional export helpers (PDF/Excel libraries loaded in `templates/results.html`).
- Newsletter — Email collection via the `Email` model and the `email_submission` API.
- Feedback — Star rating + comment + page URL + IP, stored via `Feedback` and `submit_feedback`.
- Cancel processing — POST/GET to `cancel_processing/` sets a global flag so long jobs can be aborted between stages.
- Resource-conscious inference — TensorFlow configured for CPU, limited threading, and memory cleanup helpers to behave better on constrained hosts.
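The near-duplicate filtering above can be illustrated with a minimal stand-in for perceptual hashing (the repo uses the ImageHash library on saved face crops; this stdlib sketch works on grayscale pixel grids represented as lists of lists, and the function names are illustrative, not the repo's):

```python
def average_hash(pixels):
    """Tiny perceptual hash: flatten the grid, threshold each pixel at the global mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def hamming(h1, h2):
    """Count of positions where two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def is_near_duplicate(pixels_a, pixels_b, max_distance=5):
    """Treat two frames as duplicates when their hashes are within max_distance bits."""
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= max_distance
```

Frames whose hash lands within a small Hamming distance of an already-kept frame are discarded, which keeps inference from scoring many nearly identical crops.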
| Layer | Technology |
|---|---|
| Framework | Django 3.2 |
| Deep learning | TensorFlow 2.10, Keras, Xception |
| Vision | OpenCV (Haar frontal-face cascade), Pillow, ImageHash |
| Model delivery | huggingface_hub (hf_hub_download) |
| Server | Gunicorn, WhiteNoise |
| Frontend | HTML, Tailwind (CDN on results page), ApexCharts, Font Awesome |
- `upload_video` (`app/views.py`) accepts MP4 only (`VideoUploadForm`/`clean_video` in `app/forms.py`).
- The upload is saved under `MEDIA_ROOT`; the previous `media/` tree is cleared for a clean run.
- `check_faces_in_video` scans up to ~10 spread-out frames; if no face is found, it returns the JSON error `No faces detected`.
- `FrameCapture` reads the video, runs Haar detection on every 20th frame, and saves face crops under `media/frames/`; `remove_non_face_and_duplicate_frames` then deduplicates.
- `evaluate_frames` loads each image with Keras `image.load_img(..., target_size=(224, 224))`, applies Xception preprocessing, calls `model.predict`, averages the fake probability across frames, and decides Real vs Fake (threshold 0.5 on that mean).
- Results are rendered in `templates/results.html` with chart series passed as JSON.
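The final aggregation step reduces to averaging per-frame fake probabilities and thresholding the mean at 0.5. A stdlib sketch of that logic (the function name is illustrative, and how the displayed confidence is derived may differ in the actual views):

```python
from statistics import fmean

def aggregate_verdict(fake_probs, threshold=0.5):
    """Average the per-frame fake probabilities and threshold the mean."""
    mean_fake = fmean(fake_probs)
    label = "Fake" if mean_fake > threshold else "Real"
    # Report confidence in the chosen label rather than the raw fake score.
    confidence = mean_fake if label == "Fake" else 1.0 - mean_fake
    return label, confidence
```

For example, per-frame scores of `[0.9, 0.8, 0.7]` yield a Fake verdict with roughly 0.8 confidence.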
Model loading (once at import, with the imports the snippet depends on shown for context):

```python
import os

import tensorflow as tf
from django.conf import settings
from huggingface_hub import hf_hub_download

def load_model_from_hf():
    """Download the .h5 weights from the Hub (cached under models/) and load them."""
    model_path = hf_hub_download(
        repo_id="abdulrehman77/deepfakedetection",
        filename="XSoftmax- 1st high P.h5",
        cache_dir=os.path.join(settings.BASE_DIR, 'models')
    )
    return tf.keras.models.load_model(model_path, compile=False)

model = load_model_from_hf()
```

- Architecture: Xception (ImageNet-style input, fine-tuned for binary real/fake).
- Training data: Described in project documentation as aligned with the Deepfake Detection Challenge (DFDC) style real vs. fake video data.
- Weights: Hosted on Hugging Face: repo `abdulrehman77/deepfakedetection`, file `XSoftmax- 1st high P.h5`. The app downloads this file on first run into `models/` (cache).
- Validation performance (reported): on the order of ~94% accuracy on the held-out validation setup used for the FYP (exact split and metric definitions follow the training notebook, which is not bundled here).
Training code: Training notebooks and scripts are not part of this repository. To retrain, you would typically:
- Prepare face crops or frames from real and fake sources (e.g. DFDC or similar).
- Fine-tune Xception with binary cross-entropy (or equivalent), matching the 224×224 input size and Xception preprocessing.
- Export the `.h5` weights and either place them locally or upload them to Hugging Face and point `hf_hub_download` at your file.
```
deepfakeDetection-main/
├── app/
│   ├── views.py          # Inference pipeline, APIs (feedback, email, cancel)
│   ├── forms.py          # VideoUploadForm (MP4), email form
│   ├── models.py         # Email, Feedback
│   ├── urls.py           # Routes for upload, cancel, feedback, newsletter
│   └── admin.py          # Admin for Email & Feedback
├── deepfake_detector/
│   ├── settings.py       # DB, static/media, WhiteNoise, upload limits
│   ├── urls.py           # Root URLconf (admin + app)
│   └── wsgi.py           # Gunicorn entry
├── templates/
│   ├── landing_page.html # Main upload UI
│   ├── results.html      # Results + charts
│   └── base.html, login.html, signup.html
├── media/                # Created at runtime (uploads, frames) — not for git
├── models/               # HF cache for downloaded .h5 — created at runtime
├── requirements.txt
├── Dockerfile
├── install.sh            # pip helper (numpy first)
├── manage.py
└── README.md
```
- Python 3.9–3.10 recommended (matches TensorFlow 2.10.1 wheels on common platforms).
- FFmpeg is not invoked directly; OpenCV reads the videos, so make sure your OS/OpenCV build supports the codecs used in your MP4 files.
- Internet on first run to download weights from Hugging Face.
```shell
git clone https://github.com/coder-msk/DeepDeception---AI-Powered-Deepfake-Detection.git
cd DeepDeception---AI-Powered-Deepfake-Detection
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
```

Either:

```shell
chmod +x install.sh
./install.sh
```

or:

```shell
pip install --no-cache-dir numpy==1.26.4
pip install -r requirements.txt
```

Run migrations:

```shell
python manage.py migrate
```

Optional admin user:

```shell
python manage.py createsuperuser
```

Start the development server:

```shell
python manage.py runserver
```

Open http://127.0.0.1:8000/ and upload an MP4 that contains clear faces.
After collectstatic (WhiteNoise serves from STATIC_ROOT):

```shell
python manage.py collectstatic --noinput
gunicorn deepfake_detector.wsgi:application --bind 0.0.0.0:8000
```

For platforms that inject PORT (e.g. Railway), bind to 0.0.0.0:$PORT in your process manager or container entrypoint.
Build and run (adjust port as needed):

```shell
docker build -t deepdeception .
docker run -p 8080:8080 -e PORT=8080 deepdeception
```

Note: The Dockerfile uses Gunicorn; ensure PORT matches your platform's expectations. If `$PORT` is not expanded inside the container CMD, set the bind address in a way your host documents (shell form or explicit port).
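One common way to make `$PORT` expand at runtime is a shell-form `CMD`. This is a sketch of how such a line could look, not necessarily what this repo's Dockerfile contains; check the actual `Dockerfile` before relying on it:

```dockerfile
# Shell form: the shell expands ${PORT}, falling back to 8080 when unset.
CMD gunicorn deepfake_detector.wsgi:application --bind 0.0.0.0:${PORT:-8080}
```

The exec form (`CMD ["gunicorn", ...]`) does not go through a shell, so environment variables in its arguments are not expanded.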
| Method / path | Purpose |
|---|---|
| `GET /` | Upload page (`upload_video`) |
| `POST /` | Submit MP4 for analysis |
| `POST/GET /cancel_processing/` | Cancel ongoing processing |
| `POST /api/feedback/` | JSON feedback (rating, comment, page_url) |
| `POST /newsletter/submit-email/` | JSON `{ "email": "..." }` |
| `/admin/` | Django admin (emails & feedback) |
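The cancel endpoint works by flipping a flag that the pipeline checks between stages. A hedged sketch of that pattern (the app uses a module-level global; `threading.Event` is shown here as a thread-safe equivalent, and all names are illustrative):

```python
import threading

# Shared cancellation flag, set by the cancel endpoint's view.
cancel_flag = threading.Event()

def run_pipeline(stages, inputs):
    """Run each stage in order, aborting if cancellation was requested."""
    result = inputs
    for stage in stages:
        if cancel_flag.is_set():
            return None          # aborted between stages
        result = stage(result)
    return result
```

Because the flag is only checked between stages, a cancel request does not interrupt a stage that is already running (e.g. a long `model.predict` call); it takes effect at the next stage boundary.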
- `DEBUG`, `SECRET_KEY`, and `ALLOWED_HOSTS` in `deepfake_detector/settings.py` are suitable for development only. For production, use environment variables, a strong secret, `DEBUG=False`, and restricted hosts.
- Upload limit: about 100 MB via `DATA_UPLOAD_MAX_MEMORY_SIZE`/`FILE_UPLOAD_MAX_MEMORY_SIZE`.
- TensorFlow is forced to CPU in `app/views.py` for predictable behavior on shared servers.
- Muhammad Salman Khan
- AbdurRehman
- Maheen Sheikh
Final Year Project, Bahria University. Deepfake research and datasets such as DFDC informed the modeling choices; the Xception architecture is credited to its authors and used here via Keras Applications.