| Version | Supported |
|---|---|
| 1.3.x | ✅ Yes |
| 1.2.x | ✅ Yes |
| 1.1.x | ✅ Yes |
| 1.0.x | ✅ Yes |
We take security seriously. If you discover a security vulnerability in this project, please report it responsibly.
- DO NOT open a public GitHub issue for security vulnerabilities
- Instead, email us at: security@dubsopenhub.com
- Or use GitHub's private vulnerability reporting
Please provide as much of the following as possible:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if you have one)
What you can expect from us:
- Acknowledgment within 48 hours
- An initial assessment within 1 week
- A fix or mitigation as quickly as possible
- Credit in the release notes (unless you prefer anonymity)
This repository has the following GitHub security features configured:
| Feature | Status | Notes |
|---|---|---|
| ✅ Dependabot Alerts | Enabled | Monitors dependencies for known vulnerabilities |
| ✅ Dependabot Security Updates | Enabled | Auto-creates PRs to fix vulnerable dependencies |
| ✅ Secret Scanning | Enabled | Detects accidentally committed secrets |
| ✅ Secret Scanning Push Protection | Enabled | Blocks pushes containing secrets |
| ✅ Code Scanning (CodeQL) | Available | Static analysis for security bugs |
Since this is a Copilot CLI skill (no runtime code, only markdown instructions), the primary security considerations are:
- No secrets in skill files - SKILL.md and agent.md should never contain API keys, tokens, or credentials
- Safe instructions - Skill instructions must never direct the agent to bypass security controls
- Dependency awareness - If dependencies are added in the future, keep them updated
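The "no secrets" consideration above can be backed by a local check before committing. Below is a minimal, hypothetical sketch of such a scan; the regex patterns are illustrative examples only (GitHub's own secret scanning covers far more token formats), and `find_secret_like_strings` is not part of this repository.

```python
import re

# Illustrative patterns only -- a real scanner (e.g. GitHub secret scanning)
# recognizes many more credential formats than these.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic "api_key = ..." assignment
]

def find_secret_like_strings(text: str) -> list[str]:
    """Return any substrings of `text` that match a secret-like pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Running this over SKILL.md and agent.md before each commit would flag obvious credential leaks early, complementing (not replacing) the push protection listed above.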
Since this skill orchestrates multiple AI models and processes user-provided task descriptions, prompt injection is a relevant concern:
- Sealed judging - Judge models receive anonymized submissions with model fingerprints stripped, reducing the attack surface for identity-based manipulation
- Input sanitization - SKILL.md includes anti-gaming protections: calibration anchors, keyword-stuffing detection, test-tampering scans, and prompt-injection scans
- No credential passthrough - User input is used as task descriptions only; it is never interpolated into system-level commands or used to access external services
- Consensus scoring - Even if one judge model is influenced by injected content, the median-of-3 consensus mechanism limits its impact on final scores
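To illustrate the sealed-judging and consensus mechanisms described above, here is a minimal sketch. The `anonymize` regex and both function names are hypothetical stand-ins (the actual fingerprint-stripping rules live in SKILL.md); the median-of-3 behavior is the point.

```python
import re
import statistics

def anonymize(submission: str) -> str:
    """Redact obvious model self-identification before judging.
    Illustrative pattern only; real fingerprint stripping is broader."""
    return re.sub(
        r"(?i)\b(as an ai (?:model|assistant)|i am (?:gpt|claude|gemini)[\w.-]*)\b",
        "[redacted]",
        submission,
    )

def consensus_score(judge_scores: list[float]) -> float:
    """Median of three judge scores: a single manipulated judge cannot
    move the result past the two unaffected ones."""
    assert len(judge_scores) == 3
    return statistics.median(judge_scores)
```

For example, if an injected prompt drags one judge down to 1.0 while the other two score 8.0 and 8.5, the consensus stays at 8.0; the outlier is simply ignored by the median.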
This project is licensed under the MIT License.