Animark-AI is an AI-powered video generation system that creates high-conversion short-form advertisements using Anime-style and cinematic animation techniques.
Unlike generic text-to-video tools, Animark-AI is built specifically for advertising, optimizing for:
- Hooks
- Visual consistency
- Brand recall
- Social-media performance
🎯 Our mission: Make anime-quality video ads accessible to startups, creators, and small brands — for free, locally, and at scale.
## The Problem

- High-quality video ads are expensive
- Anime & animated ads require skilled artists
- Iteration is slow and costly
- Small brands can’t compete visually
## The Solution

Animark-AI addresses this by:

- Automating ad creation end-to-end
- Using anime & cinematic styles proven to boost engagement
- Enabling rapid A/B testing with AI-generated variants
- Running locally with consumer GPUs
## Key Features

- LLM-powered marketing copy
- Hook-first ad structure
- Scene-by-scene storyboard generation
Dedicated pipelines for:

- **Anime Ads**
  - Cel shading
  - High-energy motion
  - Expressive frames
- **Cinematic / 3D Ads**
  - Clean product focus
  - Dramatic lighting
  - Studio-quality visuals
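Per-style pipelines typically come down to per-style prompt modifiers and settings. A hedged sketch in the spirit of `generation/styles.py`; the preset contents and the `fps` values are illustrative assumptions:

```python
# Hypothetical style presets: each pipeline gets its own prompt modifiers,
# negative prompt, and frame rate.
STYLE_PRESETS = {
    "anime": {
        "positive": "cel shaded, vibrant anime style, dynamic action lines",
        "negative": "photorealistic, blurry, low contrast",
        "fps": 12,  # limited-animation look
    },
    "cinematic": {
        "positive": "dramatic studio lighting, shallow depth of field, product hero shot",
        "negative": "cartoon, flat shading, clutter",
        "fps": 24,
    },
}

def build_prompt(style: str, subject: str) -> str:
    """Combine a subject with its style preset's positive modifiers."""
    preset = STYLE_PRESETS[style]
    return f"{subject}, {preset['positive']}"
```

Centralizing styles this way means adding a new look is a config change rather than a new code path.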
**Visual generation**
- SDXL image generation
- AnimateDiff motion synthesis
- ControlNet + IP-Adapter for consistency

**Audio**
- Free local TTS (Edge-TTS / XTTS)
- Automatic subtitle generation
- Audio–video sync

**Editing & rendering**
- Scene stitching
- Beat-synced transitions
- Zoom & motion effects
- Logo and CTA overlays (FFmpeg)
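The logo and CTA overlays can be expressed as a single FFmpeg filter graph. A hedged sketch that only builds the argument list (file names, positions, and font sizes are placeholder assumptions); it does not run FFmpeg itself:

```python
def overlay_cmd(video: str, logo: str, cta_text: str, out: str) -> list[str]:
    """Build an ffmpeg command that stamps a logo (top-right) and a CTA
    caption (bottom-center) onto a finished ad. Returns argv; run it with
    subprocess.run(cmd, check=True) once ffmpeg is on PATH."""
    filters = (
        "[0:v][1:v]overlay=W-w-20:20,"                    # logo, 20px inset
        f"drawtext=text='{cta_text}':x=(w-text_w)/2:y=h-60:"
        "fontsize=36:fontcolor=white:box=1:boxcolor=black@0.5"
    )
    return [
        "ffmpeg", "-y",
        "-i", video,       # input 0: the rendered ad
        "-i", logo,        # input 1: the brand logo
        "-filter_complex", filters,
        "-c:a", "copy",    # keep the mixed audio track untouched
        out,
    ]
```

Building argv as a list (rather than a shell string) avoids quoting bugs when product names contain spaces.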
## Tech Stack

- Language: Python 3.10+
- Frameworks: PyTorch, Diffusers
- Image: SDXL
- Motion: AnimateDiff
- Consistency: ControlNet, IP-Adapter
- LLMs: GPT-4o / Claude / Llama 3 (Groq)
- TTS: Edge-TTS, XTTS v2
- Captions: Whisper / Faster-Whisper
- Video: FFmpeg, MoviePy
- UI: Streamlit / Gradio
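With several LLM backends in the stack, each agent task can be routed to a suitable provider. A hedged sketch in the spirit of `llm/router.py`; the task names, providers, and model identifiers below are assumptions for illustration:

```python
# Hypothetical routing table: fast/cheap models for high-volume copy
# variants, a stronger model for storyboard reasoning.
ROUTES = {
    "ad_copy":    {"provider": "groq",   "model": "llama-3-70b"},
    "storyboard": {"provider": "openai", "model": "gpt-4o"},
    "hooks":      {"provider": "groq",   "model": "llama-3-8b"},
}

def route(task: str) -> dict:
    """Return the provider/model for a task, falling back to a local model
    so the core pipeline never requires a paid API."""
    return ROUTES.get(task, {"provider": "local", "model": "llama-3"})
```

The local fallback is what keeps the "no paid APIs required" promise intact even when no keys are configured.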
## Hardware Requirements

Animark-AI is designed to run on consumer hardware.
| Component | Recommended |
|---|---|
| GPU | RTX 4060 (8GB VRAM) |
| RAM | 16GB+ |
| OS | Windows / Linux |
| CUDA | 12.x |
⚡ No cloud required. No paid APIs required for core pipeline.
## Project Structure

```
Animark-AI/
│
├── README.md                  # Product-facing overview (FIRST IMPRESSION)
├── WEBSITE.md                 # Landing page / marketing copy
├── DEMO.md                    # Demo videos, GIFs, walkthrough
│
├── docs/                      # SYSTEM & ENGINEERING
│   ├── HLD.md                 # High-Level Design
│   ├── LLD.md                 # Low-Level Design
│   ├── ARCHITECTURE.md        # Component & deployment architecture
│   ├── PIPELINE.md            # End-to-end generation pipeline
│   ├── DATA_REPORTS.md        # Latency, VRAM, quality metrics
│   ├── EXPERIMENTS.md         # Ablations, prompt tests, failures
│   ├── BENCHMARKS.md          # Comparisons vs other tools
│   ├── SECURITY.md            # Abuse prevention, watermarking
│   └── ETHICS.md              # Copyright, deepfake safety
│
├── research/                  # SCIENTIFIC THINKING
│   ├── related_work.md        # AnimateDiff, SVD, ControlNet papers
│   ├── papers.md              # Paper summaries & links
│   └── findings.md            # Insights & lessons learned
│
├── product/                   # FOUNDER MODE
│   ├── roadmap.md             # 30 / 90 / 365 day plan
│   ├── monetization.md        # Business model
│   ├── user_personas.md       # Creators, startups, agencies
│   ├── go_to_market.md        # Distribution & growth
│   └── pricing_future.md      # Optional paid tiers (later)
│
├── src/                       # CORE CODE
│   └── animark_ai/
│       ├── __init__.py
│       ├── domain/            # Core business concepts
│       │   ├── ad.py
│       │   ├── scene.py
│       │   ├── storyboard.py
│       │   ├── style.py
│       │   └── enums.py
│       ├── agents/            # LLM agents
│       │   ├── script_agent.py
│       │   ├── storyboard_agent.py
│       │   ├── hook_agent.py
│       │   └── prompt_agent.py
│       ├── generation/        # Visual generation
│       │   ├── image_gen.py   # SDXL
│       │   ├── motion_gen.py  # AnimateDiff
│       │   ├── consistency.py # ControlNet, IP-Adapter
│       │   └── styles.py      # Anime / cinematic configs
│       ├── audio/             # Voice & sound
│       │   ├── tts.py
│       │   ├── sfx.py
│       │   └── captions.py
│       ├── video/             # Editing & rendering
│       │   ├── editor.py
│       │   ├── transitions.py
│       │   └── exporter.py
│       ├── evaluation/        # Ad quality evaluation
│       │   ├── hook_score.py
│       │   ├── visual_score.py
│       │   └── engagement_proxy.py
│       ├── llm/               # LLM interfaces
│       │   ├── prompts/
│       │   │   ├── marketing.py
│       │   │   ├── storytelling.py
│       │   │   └── ad_copy.py
│       │   ├── inference.py
│       │   └── router.py
│       └── utils/
│           ├── logger.py
│           ├── config.py
│           └── gpu_utils.py
│
├── api/                       # BACKEND
│   ├── main.py                # FastAPI app
│   ├── routes/
│   │   ├── generate.py
│   │   ├── preview.py
│   │   └── health.py
│   ├── schemas/
│   │   └── request_response.py
│   └── deps.py
│
├── ui/                        # FRONTEND
│   └── streamlit_app.py
│
├── assets/                    # VISUAL ASSETS
│   ├── images/
│   ├── videos/
│   ├── diagrams/
│   └── charts/
│
├── tests/                     # TESTING
│   ├── agents/
│   ├── generation/
│   ├── audio/
│   ├── video/
│   └── api/
│
├── docker/                    # DEPLOYMENT
│   └── Dockerfile
│
├── requirements.txt
├── pyproject.toml             # (optional, future-proofing)
├── .env.example
└── .gitignore
```
## Installation

Prerequisites:

- Python 3.10+
- NVIDIA GPU (CUDA enabled)
- FFmpeg installed and added to PATH
```bash
git clone https://github.com/yourusername/Animark-AI.git
cd Animark-AI
python -m venv venv
source venv/bin/activate   # Linux / macOS
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```

Create a `.env` file:

```bash
OPENAI_API_KEY=your_key_here
HF_TOKEN=your_huggingface_token
```

(Optional — Animark-AI can run fully local.)
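Once installed, generation is driven by a handful of CLI flags. A hedged argparse sketch of what the entry point's interface might look like; the flag names follow the quick-start example, but the defaults and help text are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI surface matching the documented quick-start flags."""
    p = argparse.ArgumentParser(
        prog="animark",
        description="Generate a short-form video ad.",
    )
    p.add_argument("--product", required=True,
                   help="Product or brand to advertise")
    p.add_argument("--style", choices=["anime", "cinematic"], default="anime",
                   help="Which generation pipeline to use")
    p.add_argument("--duration", type=int, default=15,
                   help="Target ad length in seconds")
    return p
```

Constraining `--style` with `choices` surfaces typos at parse time instead of deep inside the pipeline.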
## Quick Start

```bash
python main.py \
  --product "Energy Drink" \
  --style anime \
  --duration 15
```

Or launch the web UI:

```bash
streamlit run ui/app.py
```

## Roadmap

- Script → Image → Video
- Anime Ad MVP
- ControlNet (Depth, OpenPose)
- IP-Adapter for characters & products
- Context-aware sound effects
- Beat-synced transitions
- Hook quality scoring
- Multi-variant ad generation (A/B)
- Brand memory (colors, logos)
- Aspect ratio export (9:16, 1:1, 16:9)
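Multi-aspect export mostly reduces to picking the right output resolution per platform. A small hedged helper illustrating the roadmap item above; the preset resolutions are common social-media defaults, not values from the repo:

```python
# Common social-media resolutions for each target aspect ratio.
ASPECTS = {"9:16": (1080, 1920), "1:1": (1080, 1080), "16:9": (1920, 1080)}

def export_size(aspect: str, short_side: int = 1080) -> tuple[int, int]:
    """Scale the preset so its short side matches `short_side`, keeping
    both dimensions even (a requirement of common H.264 encoders)."""
    w, h = ASPECTS[aspect]
    scale = short_side / min(w, h)

    def even(v: float) -> int:
        return int(round(v * scale / 2) * 2)

    return even(w), even(h)
```

The same source render can then be cropped or padded to each size when exporting variants.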
## Documentation

- 📐 High-Level Design (HLD)
- 🔩 Low-Level Design (LLD)
- 🧪 Experiments & Benchmarks
- 📄 Research References
- 📈 Monetization Strategy
(See /docs and /research folders)
## Monetization

Animark-AI is free & open-source, with optional future offerings:
- Hosted inference
- API access
- Agency plans
- Brand automation tools
## Contributing

Contributions are welcome! Areas of interest:
- Research improvements
- Performance optimizations
- New styles
- UI/UX enhancements
## License

MIT License — free to use, modify, and distribute.
## Vision

Animark-AI aims to become the open-source standard for anime-style AI video advertising, enabling anyone to create studio-quality ads without cost or complexity.
If you like this project, ⭐ star the repo and join the journey.