feat: dual FLM/Ollama backend, Docker MariaDB, external config, and benchmark suite #22
Matcraft94 wants to merge 6 commits into
Conversation
- Add `llm_backends.py` with unified FLM (OpenAI-compatible) and Ollama support
- Add `docker-compose.yml` for MariaDB and `docker-compose.tools.yml` for Parrot OS fallback
- Migrate `db.py` and `export.py` to environment-based credentials
- Add `.env.example`, `config/system_prompt.txt`, and `config/initdb/01-schema.sql`
- Update `tools.py` with automatic Docker fallback for missing pentest tools
- Refactor `llm.py` to use new backend and load system prompt from file
- Update `metatron.py` banner to show active backend and model
Pull request overview
This PR modernizes METATRON’s local runtime by adding a switchable FLM/Ollama LLM layer, containerizing supporting infrastructure (MariaDB + pentest tools fallback), and moving runtime configuration into external files/env vars.
Changes:
- Introduces a unified LLM backend interface (`llm_backends.py`) and updates `llm.py` to use it, including more tolerant parsing and an externalized system prompt.
- Adds Docker Compose stacks for MariaDB and a “tools” container fallback, plus a `start.sh` launcher.
- Adds an FLM benchmarking script + sample results and updates docs/config templates accordingly.
Reviewed changes
Copilot reviewed 17 out of 19 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| `tools.py` | Adds local tool execution with Docker fallback via `docker exec metatron-tools`. |
| `start.sh` | New environment launcher that starts Docker services, sets up venv/deps, and launches the CLI. |
| `requirements.txt` | Adds `openai` and `python-dotenv` dependencies for the new backend/config approach. |
| `metatron.py` | Updates DB connection failure guidance to include Docker startup instructions. |
| `llm_backends.py` | New unified FLM (OpenAI-compatible) + Ollama backend selector via env var. |
| `llm.py` | Switches to unified backend, loads system prompt from file, and expands parsers to handle more output formats. |
| `export.py` | Moves DB connection details to env vars (external config). |
| `docker-compose.yml` | New MariaDB container with init scripts and healthcheck. |
| `docker-compose.tools.yml` | New Parrot OS tools container to run missing pentest tools. |
| `db.py` | Moves DB connection details to env vars (external config). |
| `config/system_prompt.txt` | New external system prompt file. |
| `config/initdb/01-schema.sql` | New schema initialization SQL for Docker MariaDB. |
| `benchmark_models.py` | New benchmark runner that starts/stops FLM per model and evaluates parsing/risk output. |
| `benchmark_results.json` | Adds sample benchmark output data used in documentation. |
| `README.md` | Updates setup/docs for FLM/Ollama, Docker DB/tools, `.env` config, and benchmark guidance. |
| `Modelfile.test` | Adds a test Modelfile variant. |
| `.env.example` | Adds env template for backend selection, model endpoints, DB credentials, and prompt path. |
| `.dockerignore` | Adds dockerignore defaults (venv, caches, `.env`, etc.). |
| `.gitignore` | Adds `.gitnexus` ignore entry. |
In `export.py`:

```python
DB_HOST = os.getenv("METATRON_DB_HOST", "localhost")
DB_USER = os.getenv("METATRON_DB_USER", "metatron")
DB_PASS = os.getenv("METATRON_DB_PASS", "metatron123")
DB_NAME = os.getenv("METATRON_DB_NAME", "metatron")
```
Like db.py, this module reads DB settings from environment variables but does not load .env. If users follow the README/.env.example workflow, exports may still use the hard-coded defaults unless .env is sourced elsewhere. Consider calling dotenv.load_dotenv() at startup (or sharing a single config loader) so .env-driven settings apply consistently.
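The real fix is a single call to `dotenv.load_dotenv()` before any module-level `os.getenv()` runs. For illustration only, here is a stdlib-only approximation of what that call does, so the behavior being asked for is concrete (the function name `load_env_file` is hypothetical, not part of python-dotenv):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv(): parse KEY=value lines
    into os.environ without overriding variables that are already set."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines that are not assignments.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Calling this (or the real `load_dotenv()`) once at process startup, before `db.py`/`export.py` are imported, makes their env-var defaults act purely as fallbacks.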
In `db.py`:

```diff
 DB_HOST = os.getenv("METATRON_DB_HOST", "localhost")
 DB_PORT = int(os.getenv("METATRON_DB_PORT", "3306"))
 DB_USER = os.getenv("METATRON_DB_USER", "metatron")
 DB_PASS = os.getenv("METATRON_DB_PASS", "metatron123")
 DB_NAME = os.getenv("METATRON_DB_NAME", "metatron")

 def get_connection():
-    """Returns a MariaDB connection. No password (local setup)."""
+    """Returns a MariaDB connection."""
     return mysql.connector.connect(
-        host="localhost",
-        user="metatron",
-        password="123",
-        database="metatron"
+        host=DB_HOST,
+        port=DB_PORT,
+        user=DB_USER,
+        password=DB_PASS,
+        database=DB_NAME
     )
```
DB_* values are read from environment variables, but this module does not load .env and the provided start.sh also doesn’t source .env. As a result, users configuring credentials in .env may still connect with defaults. Consider calling dotenv.load_dotenv() at process startup (e.g., in metatron.py before importing db/export) or loading it in db.py itself.
In `export.py`:

```diff
 DB_HOST = os.getenv("METATRON_DB_HOST", "localhost")
 DB_USER = os.getenv("METATRON_DB_USER", "metatron")
 DB_PASS = os.getenv("METATRON_DB_PASS", "metatron123")
 DB_NAME = os.getenv("METATRON_DB_NAME", "metatron")

 def get_connection():
     return mysql.connector.connect(
-        host="localhost",
-        user="metatron",
-        password="123",
-        database="metatron"
+        host=DB_HOST,
+        user=DB_USER,
+        password=DB_PASS,
+        database=DB_NAME
     )
```
export.py reads DB connection info from env vars but does not support METATRON_DB_PORT (db.py does). If MariaDB is exposed on a non-3306 port (common with Docker), exports will fail to connect. Add DB_PORT handling here for consistency with db.py.
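The consistency fix is small; a sketch mirroring the `DB_PORT` line that `db.py` already has:

```python
import os

# Mirror db.py: read the port from the same env var, defaulting to 3306,
# so exports connect on the same (possibly remapped) Docker port.
DB_PORT = int(os.getenv("METATRON_DB_PORT", "3306"))

# export.py's get_connection() would then pass port=DB_PORT to
# mysql.connector.connect() alongside host/user/password/database.
```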
In `docker-compose.yml`:

```yaml
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: metatron
      MYSQL_USER: metatron
      MYSQL_PASSWORD: metatron123
```
The MariaDB credentials are hard-coded in docker-compose.yml. Since the PR introduces external configuration via .env, consider using Compose env var substitution (e.g., ${MYSQL_PASSWORD}) and/or a local-only .env file to avoid committing real credentials and to keep config in one place.
⬇️ Suggested change

```diff
-      MYSQL_ROOT_PASSWORD: rootpass
-      MYSQL_DATABASE: metatron
-      MYSQL_USER: metatron
-      MYSQL_PASSWORD: metatron123
+      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
+      MYSQL_DATABASE: ${MYSQL_DATABASE}
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
```
In `docker-compose.tools.yml`:

```yaml
    command: sleep infinity
    cap_add:
      - NET_RAW
    network_mode: host
```
docker-compose.tools.yml runs the tools container with host networking and NET_RAW capability. That’s a significant privilege escalation surface. If this is required for nmap, consider documenting the risk clearly and/or narrowing privileges (e.g., avoid host networking when possible, add only the minimum caps needed).
⬇️ Suggested change

```diff
-    network_mode: host
```
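A hedged sketch of the narrower configuration the comment asks for, assuming the default bridge network is acceptable for most scans (nmap's raw-socket SYN scans still need `NET_RAW`, and some host-discovery modes genuinely require host networking, so this is a starting point, not a drop-in fix):

```yaml
services:
  pentest-tools:
    image: parrotsec/security
    container_name: metatron-tools
    volumes:
      - /tmp/metatron-scans:/scans
    command: sleep infinity
    cap_drop:
      - ALL          # start from zero capabilities...
    cap_add:
      - NET_RAW      # ...and add back only what raw-socket scans need
    # network_mode: host intentionally omitted; re-enable it only for
    # tools that demonstrably require the host network namespace.
```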
In `start.sh`:

```bash
echo "[*] Waiting for MariaDB to be ready..."
for i in {1..30}; do
  if docker exec metatron-mariadb healthcheck.sh --connect --innodb_initialized &> /dev/null; then
    echo "[+] MariaDB is ready."
    break
  fi
  sleep 1
done
```
The “wait for MariaDB” loop doesn’t fail if the DB never becomes healthy; the script will continue and metatron.py will likely exit with a DB error. Consider exiting with a clear message after the loop (or using docker compose up --wait) so failures are surfaced deterministically.
⬇️ Suggested change

```diff
-echo "[*] Waiting for MariaDB to be ready..."
-for i in {1..30}; do
-  if docker exec metatron-mariadb healthcheck.sh --connect --innodb_initialized &> /dev/null; then
-    echo "[+] MariaDB is ready."
-    break
-  fi
-  sleep 1
-done
+echo "[*] Waiting for MariaDB to be ready..."
+mariadb_ready=false
+for i in {1..30}; do
+  if docker exec metatron-mariadb healthcheck.sh --connect --innodb_initialized &> /dev/null; then
+    echo "[+] MariaDB is ready."
+    mariadb_ready=true
+    break
+  fi
+  sleep 1
+done
+
+if [ "$mariadb_ready" != true ]; then
+  echo "[!] MariaDB did not become healthy within 30 seconds."
+  exit 1
+fi
```
In `llm.py`:

```python
if re.match(r'VULN\s*:', part, re.IGNORECASE):
    vuln["vuln_name"] = re.sub(r'(?i)^VULN\s*:', '', part).strip()
elif re.match(r'SEVER\w*?:', part, re.IGNORECASE):
    vuln["severity"] = re.sub(r'(?i)^SEVER\w*?:', '', part).strip().lower()
elif re.match(r'PORT\s*:', part, re.IGNORECASE):
```
Severity parsing currently accepts arbitrary strings (e.g., values like "medium/high" seen in benchmark output). Downstream (e.g., export severity color mapping) expects a finite set (critical/high/medium/low/info/unknown). Normalize parsed values to the closest supported level (e.g., split on non-letters and take the highest) to avoid inconsistent DB/reporting behavior.
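One way to implement the suggested normalization, as a sketch: split on non-letter characters and take the worst level found, assuming the ordering critical > high > medium > low > info (the function name `normalize_severity` is hypothetical):

```python
import re

# Ordered worst-first so a combined value like "medium/high" resolves
# to the highest level it mentions.
_SEVERITY_ORDER = ["critical", "high", "medium", "low", "info"]

def normalize_severity(raw: str) -> str:
    """Map free-form model output to the finite set the exporter expects."""
    tokens = re.split(r"[^a-z]+", (raw or "").lower())
    for level in _SEVERITY_ORDER:
        if level in tokens:
            return level
    return "unknown"
```

With this in place, `vuln["severity"]` always lands in the exporter's color-mapping set regardless of what the model emits.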
In `llm_backends.py`:

```python
BACKEND = os.getenv("METATRON_LLM_BACKEND", "flm").lower()

FLM_BASE_URL = os.getenv("FLM_BASE_URL", "http://localhost:11434/v1")
```
The default FLM_BASE_URL points to port 11434 (Ollama’s default). If METATRON_LLM_BACKEND=flm and the user doesn’t set FLM_BASE_URL, requests will go to the wrong service/port. Align the default with the documented/example FLM port (e.g., 8000) or require FLM_BASE_URL explicitly when backend=flm.
⬇️ Suggested change

```diff
-FLM_BASE_URL = os.getenv("FLM_BASE_URL", "http://localhost:11434/v1")
+FLM_BASE_URL = os.getenv("FLM_BASE_URL", "http://localhost:8000/v1")
```
In `start.sh`:

```bash
# ── Check FLM ────────────────────────────────
FLM_PORT=$(grep "^FLM_BASE_URL=" .env 2>/dev/null | grep -oP '(?<=:)[0-9]+' || echo "8000")
if ! ss -tlnp 2>/dev/null | grep -q ":$FLM_PORT "; then
  echo "[!] FLM server not detected on port $FLM_PORT."
  echo "    Start it manually with:"
  echo "    flm serve qwen3.5:4b --port $FLM_PORT --ctx-len 16384 --pmode performance"
  echo ""
  read -p "Press Enter to continue anyway, or Ctrl+C to abort..."
else
```
start.sh reads values from .env (for FLM_PORT) but never sources it, and Python only sees env vars that are exported. If the intent is “external configuration via .env”, either set -a; source .env; set +a in this script or rely consistently on python-dotenv in the Python entrypoint so DB/LLM settings are actually applied.
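The `set -a` pattern mentioned above, sketched with a guard; this assumes `.env` uses plain `KEY=value` lines with no spaces around `=`:

```bash
# Export every variable defined in .env so child processes (the Python
# entrypoint) inherit it; without 'set -a', sourced assignments stay
# shell-local and os.getenv() never sees them.
if [ -f .env ]; then
  set -a        # auto-export all variables assigned from here on
  . ./.env
  set +a
fi
```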
In `llm.py`:

```diff
         parts = line.split("|")
         for part in parts:
             part = part.strip()
-            if part.startswith("EXPLOIT:"):
-                exploit["exploit_name"] = part.replace("EXPLOIT:", "").strip()
-            elif part.startswith("TOOL:"):
-                exploit["tool_used"] = part.replace("TOOL:", "").strip()
-            elif part.startswith("PAYLOAD:"):
-                exploit["payload"] = part.replace("PAYLOAD:", "").strip()
+            if re.match(r'EXPLOIT\s*:', part, re.IGNORECASE):
+                exploit["exploit_name"] = re.sub(r'(?i)^EXPLOIT\s*:', '', part).strip()
```
This block now parses EXPLOIT fields case-insensitively, but exploits are still only entered when the header line matches the exact uppercase prefix (the line.startswith("EXPLOIT:") check a few lines above). To avoid silently missing exploits, make the header detection case/whitespace tolerant as well (e.g., a regex match similar to the field parsing).
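A sketch of a tolerant header check that mirrors the field-level regex parsing; `line` is the raw model-output line, and the helper name `is_exploit_header` is hypothetical:

```python
import re

# Case/whitespace-tolerant replacement for line.startswith("EXPLOIT:"),
# so headers like "exploit :" or "Exploit:" still open a new exploit entry.
_EXPLOIT_HEADER = re.compile(r"^\s*EXPLOIT\s*:", re.IGNORECASE)

def is_exploit_header(line: str) -> bool:
    return bool(_EXPLOIT_HEADER.match(line))
```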
Hi,

Thank you for this massive contribution! This PR significantly modernizes *METATRON* by introducing containerization, external configuration, and flexible LLM backends. The addition of the FLM benchmark suite is particularly impressive for performance tracking.

However, after reviewing the changes, there are a few critical points that need to be addressed before merging to ensure stability and security:

1. *Environment Variables & Consistency:*
   - `db.py` and `export.py` currently do not call `load_dotenv()`. This might cause the application to ignore `.env` settings and fall back to hardcoded defaults.
   - `export.py` is missing the `DB_PORT` variable, which is present in `db.py`. We should ensure consistency across all database-connected modules.
2. *Security (Docker Compose):*
   - In `docker-compose.tools.yml`, the use of `network_mode: host` and `NET_RAW` capabilities poses a security risk. While I understand this is for `nmap` functionality, we should document this risk clearly or try to limit privileges.
   - It is highly recommended to use environment variable interpolation in `docker-compose.yml` (e.g., `${MYSQL_PASSWORD}`) instead of hardcoding credentials like `metatron123`.
3. *LLM Parser Robustness:*
   - The regex updates in `llm.py` are a great step forward, but some logic (like `line.startswith("EXPLOIT:")`) remains case-sensitive. It would be safer to make the header detection case-insensitive to match the field parsing logic.
4. *Start Script Logic:*
   - In `start.sh`, the MariaDB health check loop should exit with an error code if the database fails to become ready within the 30-second window. Currently, it continues execution, which will lead to a crash in `metatron.py`.

Please let me know your thoughts on these points. I’d love to merge this once these stability and security tweaks are implemented!

Best regards,
Summary

This PR adapts METATRON for modern local LLM runtimes and containerized infrastructure:

- `llm_backends.py` supporting FLM (OpenAI-compatible, ideal for AMD Ryzen AI / NPU) and Ollama via environment switch (`METATRON_LLM_BACKEND`)
- MariaDB runs in Docker via `docker-compose.yml` with automatic schema initialization
- Missing pentest tools (`nmap`, `whatweb`, `nikto`, etc.) automatically run inside a Parrot OS container (`docker-compose.tools.yml`)
- External configuration via `.env`; system prompt moved to `config/system_prompt.txt`
- `llm.py` handles both structured and markdown outputs, with typo-tolerant severity/risk extraction
- `benchmark_models.py` auto-starts/stops FLM per model and compares response time and parsing quality across 6 models

Test plan

- `METATRON_LLM_BACKEND=flm` + `flm serve qwen3.5:4b` — full E2E scan against `scanme.nmap.org`
- `METATRON_LLM_BACKEND=ollama` — backend connectivity and parsing validated
- Docker fallback for `whatweb`/`nikto` works when host tools are missing
- `python benchmark_models.py` completes for all 6 models