diff --git a/.gitignore b/.gitignore
index 7eb93d4..97c2c90 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,3 +8,4 @@ refs/*
 docs/*
 .DS_Store
 /zfs_implementation*
+/related_projects
\ No newline at end of file
diff --git a/README.md b/README.md
index d40a908..b27f79e 100644
--- a/README.md
+++ b/README.md
@@ -75,6 +75,42 @@ Supported keys (all optional):
 - `md_ufs_sector_size=` (default: `512`)
 - `md_zfs_sector_size=` (default: `512`)
 
+## Adaptive Mount Strategy (Brute-Force Parameter Discovery)
+
+For `.ffpfs` (PFS) images, ShadowMountPlus includes an optional adaptive mount strategy that automatically discovers the correct mount parameters when the standard defaults don't work. This is useful for PFS dumps with non-standard metadata or sector layouts.
+
+**How it works:**
+1. When mounting a PFS image, if the standard mount fails, the system tries a bounded set of parameter combinations.
+2. Each combination (image type, raw flags, sector size, filesystem variant) is attempted in priority order.
+3. After each failed attempt, the system waits 3 seconds and cleans up before trying the next combination.
+4. When a successful mount is found, the winning parameters are cached and reused for future mounts of the same image.
+5. If all combinations fail, the image is marked as failed and the scan moves to the next image.
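The bounded discovery loop above can be sketched as follows. This is an illustrative Python model, not the payload's actual C implementation: `try_mount`, `discover`, and the candidate tuples are hypothetical stand-ins, and only the caps (`max_attempts`, `max_seconds`, the inter-attempt sleep) mirror the configuration keys documented below.

```python
import time

# Candidate (image_type, raw_flags, sector_size, fstype) tuples in priority
# order. Values are illustrative only.
CANDIDATES = [
    (0, 0x9, 512, "pfs"),
    (5, 0x8, 512, "pfs"),
    (5, 0x8, 4096, "pfs"),
    (7, 0xC, 65536, "transaction_pfs"),
]

def try_mount(profile):
    """Hypothetical stand-in for the real attach/nmount/validate path."""
    image_type, _raw_flags, sector_size, _fstype = profile
    return image_type == 5 and sector_size == 4096  # pretend this combo works

def discover(candidates, max_attempts=20, max_seconds=60, sleep_s=0):
    """Try candidates in order; stop at the first success or when a cap hits.

    Returns (winning_profile or None, attempts_used).
    """
    deadline = time.monotonic() + max_seconds
    attempts = 0
    for profile in candidates:
        if attempts >= max_attempts or time.monotonic() >= deadline:
            break  # budget exhausted: mark the image failed and move on
        attempts += 1
        if try_mount(profile):
            return profile, attempts  # winner is reused on future mounts
        time.sleep(sleep_s)  # real code also cleans up the failed attempt
    return None, attempts

winner, used = discover(CANDIDATES)  # third candidate wins after two failures
```

On a real system the winning tuple would then be written to `/data/shadowmount/autotune.ini` and tried first on the next mount of the same image.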
+
+**Configuration keys (all optional, all PFS-specific):**
+- `pfs_bruteforce_enabled=1|0` (enable/disable; default: `1`)
+- `pfs_bruteforce_sleep_ms=` (delay between failed attempts; default: `3000`)
+- `pfs_bruteforce_max_attempts=` (max attempts per image; default: `20`)
+- `pfs_bruteforce_max_seconds_per_image=` (max time per image; default: `60`)
+- `pfs_bruteforce_cooldown_seconds=` (skip heavy brute-force on recently exhausted images; default: `300`)
+- `pfs_bruteforce_max_global_attempts=` (global attempt cap per scan window; default: `200`)
+- `pfs_bruteforce_sector_sizes=` (parser-supported override key)
+- `pfs_bruteforce_image_types=` (parser-supported override key)
+- `pfs_bruteforce_raw_flags=` (parser-supported override key)
+- `pfs_bruteforce_fstypes=` (parser-supported override key)
+
+**Cache storage:**
+- Successful profiles are automatically cached in `/data/shadowmount/autotune.ini` with the line format:
+  ```ini
+  mount_profile=<image_filename>:<profile_string>
+  ```
+- The cache is consulted first on every mount, significantly accelerating subsequent mounts of the same image.
+
+**Example workflow:**
+1. The first mount of `MyGame.ffpfs` fails with the standard parameters.
+2. Brute-force discovers that image_type=5, raw_flags=0x8, sector_size=4096, fstype=pfs works.
+3. The profile is cached in autotune.ini.
+4. The next mount of `MyGame.ffpfs` tries the cached profile first and succeeds immediately.
+
 Per-image mode override behavior:
 - Match is done by image file name (without path).
 - File names with spaces are supported.
diff --git a/build.sh b/build.sh
new file mode 100644
index 0000000..42dcc84
--- /dev/null
+++ b/build.sh
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+# One-command local build wrapper for macOS/Linux hosts.
+# Runs the same dependency flow as CI inside Ubuntu.
+
+IMAGE="ubuntu:24.04"
+WORKDIR="/work"
+REPO_NAME="pacbrew-repo"
+
+if ! command -v docker >/dev/null 2>&1; then
+  echo "[build.sh] docker is required but not found." >&2
+  echo "Install Docker Desktop (or Colima + docker CLI) and retry." >&2
+  exit 1
+fi
+
+SCRIPT='set -euo pipefail
+export DEBIAN_FRONTEND=noninteractive
+
+apt update
+apt install -y \
+  autoconf \
+  automake \
+  build-essential \
+  clang-18 \
+  curl \
+  git \
+  libarchive-tools \
+  libtool \
+  lld-18 \
+  makepkg \
+  meson \
+  pacman-package-manager \
+  pkg-config \
+  xxd \
+  zip
+
+if [[ ! -d "'"${WORKDIR}"'"/"'"${REPO_NAME}"'" ]]; then
+  git clone https://github.com/EchoStretch/pacbrew-repo "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"
+fi
+
+cd "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"/sdk
+makepkg -c -f
+pacman --noconfirm -U ./ps5-payload-*.pkg.tar.gz
+
+cd "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"/sqlite
+makepkg -c -f
+pacman --noconfirm -U ./ps5-payload-*.pkg.tar.gz
+
+cd "'"${WORKDIR}"'"
+make clean all
+
+echo "[build.sh] build complete"
+'
+
+echo "[build.sh] starting containerized build..."
+docker run --rm -t \
+  -v "$PWD:${WORKDIR}" \
+  -w "${WORKDIR}" \
+  "${IMAGE}" \
+  bash -lc "${SCRIPT}"
diff --git a/config.ini.example b/config.ini.example
index 7443365..e9f1692 100644
--- a/config.ini.example
+++ b/config.ini.example
@@ -97,3 +97,61 @@ md_zfs_sector_size=512
 # "/mnt/shadowmnt" is always added automatically.
 # scanpath=/data/homebrew
 # scanpath=/data/etaHEN/games
+
+# ===== Adaptive Mount Strategy (Brute-Force) for PFS Discovery =====
+# These settings control the PFS parameter discovery system.
+# When a PFS image mount fails or has unknown structure, the system
+# automatically tries different parameter combinations to find what works.
+
+# Enable adaptive mount strategy for PFS images:
+# 1/true/yes/on -> enable brute-force parameter discovery (recommended for PFS)
+# 0/false/no/off -> use fixed defaults only
+pfs_bruteforce_enabled=1
+
+# Sleep duration between failed mount attempts (milliseconds):
+pfs_bruteforce_sleep_ms=200
+
+# Maximum attempts per image before giving up:
+pfs_bruteforce_max_attempts=20
+
+# Maximum time per image (seconds) before moving to the next one:
+pfs_bruteforce_max_seconds_per_image=60
+
+# Cooldown after full exhaustion (seconds). During the cooldown, heavy brute-force
+# is skipped for that image so the scan can continue with others.
+pfs_bruteforce_cooldown_seconds=300
+
+# Global cap for brute-force attempts per scan window.
+pfs_bruteforce_max_global_attempts=200
+
+# Optional list overrides (currently advisory; the parser accepts these keys):
+pfs_bruteforce_sector_sizes=4096
+pfs_bruteforce_image_types=0,5,2,3,4,6,1,7
+pfs_bruteforce_raw_flags=0x9,0x8,0xD,0xC
+pfs_bruteforce_fstypes=pfs,ppr_pfs,transaction_pfs
+
+# Tested profiles are cached in /data/shadowmount/autotune.ini
+# and reused automatically on future mounts of the same image.
+# The cache section line format is:
+#   mount_profile=<image_filename>:<profile_string>
+
+# Optional probe mode: discover and store all working PFS profiles for each
+# image into a dedicated file:
+#   /data/shadowmount/pfs_probe_<image_basename>.ini
+# Disabled by default because it performs extra mount/unmount attempts.
+pfs_probe_enabled=0
+
+# Optional benchmark mode: after mounting PFS via brute-force/profile cache,
+# benchmark one discovered profile per run and persist progress in:
+#   /data/shadowmount/benchmarking.ini
+# This allows resuming after a crash/reboot from the last tested profile.
+pfs_bench_enabled=0
+
+# Benchmark file read size target in bytes (minimum total bytes read per test
+# file, chunked internally in 64 KiB reads).
+pfs_bench_min_read_bytes=65536
+
+# Benchmark slow-read threshold in milliseconds.
+# Reads or read gaps over this threshold are counted as delays in the report.
+pfs_bench_delay_threshold_ms=500
+
diff --git a/create_gp4.py b/create_gp4.py
new file mode 100644
index 0000000..30ee303
--- /dev/null
+++ b/create_gp4.py
@@ -0,0 +1,215 @@
+#!/usr/bin/env python3
+"""
+create_gp4.py
+Python port of the create-gp4 utility.
+"""
+
+import argparse
+import os
+import sys
+from datetime import datetime
+from pathlib import Path
+from xml.etree.ElementTree import Element, SubElement, tostring
+
+
+def error_exit(msg: str, *params) -> None:
+    print(msg % params if params else msg, end="")
+    sys.exit(1)
+
+
+class DirNode:
+    def __init__(self, name: str):
+        self.name = name
+        self.children: list["DirNode"] = []
+
+
+def contains_substr(items: list[str], needle: str) -> bool:
+    # Mirrors Go behavior: strings.Contains(a, e)
+    return any(needle in a for a in items)
+
+
+def get_subdir(node: DirNode, name: str) -> DirNode | None:
+    if node.name == name:
+        return node
+    for child in node.children:
+        found = get_subdir(child, name)
+        if found is not None:
+            return found
+    return None
+
+
+def get_root_dir(root: list[DirNode], name: str) -> DirNode | None:
+    for d in root:
+        if d.name == name:
+            return d
+    return None
+
+
+def indent_xml_with_tabs(xml_str: str, base_prefix: str = "\t", indent: str = "\t") -> str:
+    # Keep pretty output style close to Go's xml.MarshalIndent(prefix="\t", indent="\t").
+    import xml.dom.minidom as minidom
+
+    dom = minidom.parseString(xml_str.encode("utf-8"))
+    pretty = dom.toprettyxml(indent=indent)
+
+    lines = [ln for ln in pretty.splitlines() if ln.strip()]
+    if lines and lines[0].startswith("<?xml"):
+        lines = lines[1:]
+    return "\n".join(base_prefix + ln for ln in lines)
+
+
+def build_rootdir_tag(files: list[str]) -> str:
+    paths: list[str] = []
+    paths_clean: list[str] = []
+    root_dirs: list[DirNode] = []
+
+    # Keep only directory paths (remove filenames).
+    for f in files:
+        if f and "/" in f:
+            paths.append(Path(f).parent.as_posix())
+
+    # Sort by descending path length.
+    paths.sort(key=len, reverse=True)
+
+    # Remove duplicate/sub-paths using Go's substring behavior.
+    for p in paths:
+        if not contains_substr(paths_clean, p):
+            paths_clean.append(p)
+
+    # Build tree.
+    for p in paths_clean:
+        split = p.split("/")
+        if not split or not split[0]:
+            continue
+
+        dir_ptr = get_root_dir(root_dirs, split[0])
+        if dir_ptr is None:
+            d = DirNode(split[0])
+            dir_ptr = d
+            for part in split[1:]:
+                child = DirNode(part)
+                dir_ptr.children.append(child)
+                dir_ptr = child
+            root_dirs.append(d)
+        else:
+            for part in split[1:]:
+                d = get_subdir(dir_ptr, part)
+                if d is not None:
+                    dir_ptr = d
+                    continue
+                child = DirNode(part)
+                dir_ptr.children.append(child)
+                dir_ptr = child
+
+    # Convert to XML.
+    root_elem = Element("rootdir")
+
+    def append_dir(parent_elem: Element, node: DirNode) -> None:
+        d = SubElement(parent_elem, "dir", {"targ_name": node.name})
+        for c in node.children:
+            append_dir(d, c)
+
+    for d in root_dirs:
+        append_dir(root_elem, d)
+
+    xml_raw = tostring(root_elem, encoding="unicode")
+    return indent_xml_with_tabs(xml_raw, base_prefix="\t", indent="\t")
+
+
+def get_file_list(files_path: str) -> list[str]:
+    files: list[str] = []
+    root = Path(files_path).resolve()
+
+    if not root.exists() or not root.is_dir():
+        error_exit("Path does not exist or is not a directory: %s\n", files_path)
+
+    # Recursively include all files.
+    for current_root, dirs, filenames in os.walk(root):
+        dirs.sort()
+        filenames.sort()
+        current = Path(current_root)
+        for fn in filenames:
+            full = current / fn
+            rel = full.relative_to(root).as_posix()
+            files.append(rel)
+
+    return files
+
+
+def parse_files_to_tags(files: list[str]) -> list[str]:
+    file_tags: list[str] = []
+    for file in files:
+        if file:
+            f = Path(file).as_posix()
+            file_tags.append(f'\t\t<file targ_path="{f}" orig_path="{f}"/>')
+    return file_tags
+
+
+def create_gp4(path: str, content_id: str, files: str, files_path: str) -> None:
+    if files:
+        file_list = files.split(" ")
+    else:
+        file_list = get_file_list(files_path)
+
+    file_tag_list = parse_files_to_tags(file_list)
+    root_dir = build_rootdir_tag(file_list)
+    file_tags = "\n".join(file_tag_list)
+    current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+    gp4_contents = (
+        '<?xml version="1.0"?>\n'
+        '<psproject fmt="gp4" version="1000">\n'
+        "\t<volume>\n"
+        "\t\t<volume_type>pkg_ps4_app</volume_type>\n"
+        "\t\t<volume_id>PS4VOLUME</volume_id>\n"
+        f"\t\t<volume_ts>{current_time}</volume_ts>\n"
+        f'\t\t<package content_id="{content_id}" passcode="00000000000000000000000000000000"/>\n'
+        '\t\t<chunk_info chunk_count="1" scenario_count="1">\n'
+        "\t\t\t<chunks>\n"
+        '\t\t\t\t<chunk id="0" layer_no="0" label="Chunk #0"/>\n'
+        "\t\t\t</chunks>\n"
+        '\t\t\t<scenarios default_id="0">\n'
+        '\t\t\t\t<scenario id="0" type="sp" initial_chunk_count="1" label="Scenario #0">0</scenario>\n'
+        "\t\t\t</scenarios>\n"
+        "\t\t</chunk_info>\n"
+        "\t</volume>\n"
+        '\t<files img_no="0">\n'
+        f"{file_tags}"
+        "\n\t</files>\n"
+        f"{root_dir}\n"
+        "</psproject>\n"
+    )
+
+    with open(path, "w", encoding="utf-8", newline="\n") as out:
+        out.write(gp4_contents)
+
+
+def main() -> None:
+    parser = argparse.ArgumentParser()
+
+    # Support both original single-dash long flags and GNU-style double-dash flags.
+    parser.add_argument("-out", "--out", default="homebrew.gp4", help="output gp4 to write to")
+    parser.add_argument("-content-id", "--content-id", default="", help="content ID of the package")
+    parser.add_argument("-files", "--files", default="", help="list of files to pack into the package")
+    parser.add_argument("-path", "--path", default="", help="path to files to pack into the package")
+
+    args = parser.parse_args()
+
+    if not args.content_id:
+        error_exit("Content ID not specified, try -content-id=[content ID]\n")
+
+    if not args.files and not args.path:
+        error_exit(
+            "Content files or path not specified, try -files=\"[files, separated by spaces]\" or -path=\"[path/to/files]\"\n"
+        )
+
+    try:
+        create_gp4(args.out, args.content_id, args.files, args.path)
+    except Exception as exc:
+        error_exit("Error writing GP4: %s\n", str(exc))
+
+
+if __name__ == "__main__":
+    main()
diff --git a/include/sm_bench.h b/include/sm_bench.h
new file mode 100644
index 0000000..2dd9619
--- /dev/null
+++ b/include/sm_bench.h
@@ -0,0 +1,97 @@
+#ifndef SM_BENCH_H
+#define SM_BENCH_H
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include "sm_mount_profile.h"
+#include "sm_types.h"
+
+// Maximum working profiles collected by one probe sweep
+#define SM_PROBE_MAX_WINNERS 64
+
+// Benchmark result for one profile run against one image
+typedef struct {
+    // Profile that was tested
+    mount_profile_t profile;
+
+    // Mount timing
+    bool mount_ok;
+    uint32_t mount_ms;
+
+    // Recursive directory walk
+    bool dirlist_ok;
+    uint32_t dirlist_ms;
+    uint32_t dirlist_files;
+    uint32_t dirlist_dirs;
+
+    // eboot.bin read (present in every PS5 game)
+    bool eboot_ok;
+    uint32_t eboot_read_ms;
+    uint64_t eboot_bytes;
+    uint32_t eboot_max_gap_ms;  // max stall between consecutive 64 KB reads
+
+    // sce_sys/param.json read (main title config JSON)
+    bool param_ok;
+    uint32_t param_read_ms;
+    uint64_t param_bytes;
+    uint32_t param_max_gap_ms;
+
+    // Count of individual reads that exceeded the delay threshold
+    uint32_t slow_reads;
+    bool any_failed;
+
+    // Composite latency score in ms (lower = better; 0 = this profile failed)
+    uint32_t score_ms;
+} bench_result_t;
+
+// ---------------------------------------------------------------------------
+// Probe persistence
+// Save all working profiles found by probe sweep to
+// /data/shadowmount/pfs_probe_{image_basename}.ini
+void bench_save_probe(const char *image_basename,
+                      const mount_profile_t *profiles, int count);
+
+// Load working profiles from pfs_probe_{image_basename}.ini.
+// Returns number of profiles loaded (capped at max_count).
+int bench_load_probe(const char *image_basename,
+                     mount_profile_t *profiles_out, int max_count);
+
+// ---------------------------------------------------------------------------
+// Benchmark execution
+// Run a read-performance benchmark on an already-mounted PFS image.
+// mount_point : root of the mounted filesystem (e.g. /mnt/shadowmnt/...)
+// cfg         : runtime config for thresholds / read sizes
+// result_out  : filled on return (mount_ok is always left true by caller)
+// Returns true if at least the directory walk succeeded.
+bool bench_run_mounted(const char *mount_point,
+                       const runtime_config_t *cfg,
+                       bench_result_t *result_out);
+
+// ---------------------------------------------------------------------------
+// Benchmark persistence (all images share one benchmarking.ini)
+//
+// Load existing bench results for image_basename from benchmarking.ini.
+// Returns number of results loaded; sets *next_to_bench_out and
+// *bench_complete_out.
+int bench_load_results(const char *image_basename,
+                       bench_result_t *results_out, int max_count,
+                       int *next_to_bench_out, bool *bench_complete_out);
+
+// Upsert one bench result into benchmarking.ini.
+// Call with bench_complete=true and the best_idx once all profiles are done.
+bool bench_save_result(const char *image_basename,
+                       int profile_idx, const bench_result_t *result,
+                       int total_count, bool bench_complete, int best_idx);
+
+// ---------------------------------------------------------------------------
+// Analysis
+// Return the index of the best result (lowest score, requires mount_ok).
+// Returns -1 if no valid results.
+int bench_find_best(const bench_result_t *results, int count);
+
+// Log a human-readable summary report for all profiles of one image.
+void bench_log_report(const char *image_basename,
+                      const bench_result_t *results, int count, int best_idx);
+
+#endif
diff --git a/include/sm_brute_force.h b/include/sm_brute_force.h
new file mode 100644
index 0000000..29faf5d
--- /dev/null
+++ b/include/sm_brute_force.h
@@ -0,0 +1,76 @@
+#ifndef SM_BRUTE_FORCE_H
+#define SM_BRUTE_FORCE_H
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <time.h>
+
+#include "sm_mount_profile.h"
+#include "sm_types.h"
+
+// Brute-force mount attempt result
+typedef enum {
+    BRUTE_RESULT_SUCCESS = 0,
+    BRUTE_RESULT_ATTACH_FAILED,
+    BRUTE_RESULT_NMOUNT_FAILED,
+    BRUTE_RESULT_VALIDATION_FAILED,
+    BRUTE_RESULT_CLEANUP_FAILED,
+    BRUTE_RESULT_TIMEOUT,
+    BRUTE_RESULT_ABORT,
+} brute_force_result_t;
+
+// Per-image attempt tracking
+typedef struct {
+    uint32_t total_attempts;
+    uint32_t max_attempts;
+    uint32_t max_seconds;
+    time_t start_time;
+    bool should_stop;
+} brute_attempt_state_t;
+
+// Candidate list generator for PFS/special mounts.
+// Generates up to max_count profile candidates in priority order.
+// Pass stage=0 for Stage A (fast path), stage=1 for Stage B (expanded).
+typedef int (*profile_generator_fn)(const char *image_path,
+                                    image_fs_type_t fs_type,
+                                    bool mount_read_only,
+                                    int stage,
+                                    mount_profile_t *candidates,
+                                    int max_count);
+
+// Initialize brute-force attempt tracking
+void brute_attempt_state_init(brute_attempt_state_t *state,
+                              uint32_t max_attempts,
+                              uint32_t max_seconds_per_image);
+
+// Check if we should continue attempting (time/count limits)
+bool brute_should_continue(const brute_attempt_state_t *state);
+
+// Record one attempt and check limits
+bool brute_record_attempt(brute_attempt_state_t *state,
+                          brute_force_result_t result);
+
+// Generate PFS brute-force candidate profiles (Stage A + B)
+int brute_generate_pfs_candidates(const char *image_path,
+                                  image_fs_type_t fs_type,
+                                  bool mount_read_only,
+                                  int stage,
+                                  mount_profile_t *candidates,
+                                  int max_count);
+
+// Log one attempt with result
+void brute_log_attempt(const char *image_path,
+                       uint32_t attempt_index,
+                       uint32_t total_attempts,
+                       const mount_profile_t *profile,
+                       brute_force_result_t result,
+                       int errno_value);
+
+// Log success/cache message
+void brute_log_success(const char *image_path,
+                       const mount_profile_t *profile);
+
+// Log exhaustion message
+void brute_log_exhausted(const char *image_path);
+
+#endif
diff --git a/include/sm_mount_cache.h b/include/sm_mount_cache.h
new file mode 100644
index 0000000..e5c9802
--- /dev/null
+++ b/include/sm_mount_cache.h
@@ -0,0 +1,29 @@
+#ifndef SM_MOUNT_CACHE_H
+#define SM_MOUNT_CACHE_H
+
+#include <stdbool.h>
+#include <stddef.h>
+
+#include "sm_mount_profile.h"
+#include "sm_types.h"
+
+// Lookup a cached/autotuned mount profile by image filename.
+// Returns true if found and populates profile_out.
+bool get_cached_mount_profile(const char *image_filename,
+                              mount_profile_t *profile_out);
+
+// Upsert a winning mount profile into autotune.ini by image filename.
+// Returns true if successfully cached.
+bool cache_mount_profile(const char *image_filename,
+                         const mount_profile_t *profile);
+
+// Format a mount profile into a compact INI-friendly string
+// for storage in autotune.ini.
+void format_profile_for_cache(const mount_profile_t *profile,
+                              char *buf, size_t buf_size);
+
+// Parse a cached profile string back from autotune.ini.
+bool parse_profile_from_cache(const char *cached_str,
+                              mount_profile_t *profile_out);
+
+#endif
diff --git a/include/sm_mount_profile.h b/include/sm_mount_profile.h
new file mode 100644
index 0000000..394720f
--- /dev/null
+++ b/include/sm_mount_profile.h
@@ -0,0 +1,71 @@
+#ifndef SM_MOUNT_PROFILE_H
+#define SM_MOUNT_PROFILE_H
+
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+#include "sm_types.h"
+
+// Mount profile tuple: one complete set of parameters for a single mount attempt
+typedef struct {
+    // LVD/Mount protocol version
+    uint16_t io_version;
+
+    // LVD image type (0=UFS_DD, 5=PFS_SAVE, 7=PFS_UNKNOWN, etc)
+    uint16_t image_type;
+
+    // Raw flags before normalization (0x9, 0x8, 0xD, 0xC, etc)
+    uint16_t raw_flags;
+
+    // Normalized flags (computed from raw_flags)
+    uint16_t normalized_flags;
+
+    // Device sector size in bytes (4096, 32768, 65536, etc)
+    uint32_t sector_size;
+
+    // Secondary unit (typically sector_size, fallback 0x10000)
+    uint32_t secondary_unit;
+
+    // Filesystem type string passed to nmount ("pfs", "ppr_pfs", "transaction_pfs")
+    const char *fstype;
+
+    // Budget domain ("game" or "system")
+    const char *budgetid;
+
+    // Mount key mode (normally "SD", optionally "GD", "AC")
+    const char *mkeymode;
+
+    // Signature verification flag (0 or 1)
+    uint8_t sigverify;
+
+    // PlayGo support flag (0 or 1)
+    uint8_t playgo;
+
+    // Disc flag (0 or 1)
+    uint8_t disc;
+
+    // Whether the ekpfs key should be included for this attempt.
+    bool include_ekpfs;
+
+    // Whether noatime should be included in nmount options.
+    bool supports_noatime;
+
+    // Read-only mode flag
+    bool mount_read_only;
+
+    // Optional human-readable label for logging
+    const char *label;
+} mount_profile_t;
+
+// Profile validation and helpers
+bool mount_profile_validate(const mount_profile_t *profile);
+void mount_profile_log(const mount_profile_t *profile, const char *tag);
+void mount_profile_format_compact(const mount_profile_t *profile,
+                                  char *buf, size_t buf_size);
+
+// Helper to create a basic profile with sensible defaults
+mount_profile_t mount_profile_create_default(image_fs_type_t fs_type,
+                                             bool mount_read_only);
+
+#endif
diff --git a/include/sm_types.h b/include/sm_types.h
index 84bc23a..d61fdad 100644
--- a/include/sm_types.h
+++ b/include/sm_types.h
@@ -125,6 +125,25 @@ typedef struct runtime_config {
   uint32_t md_sector_exfat;
   uint32_t md_sector_ufs;
   uint32_t md_sector_zfs;
+  // Adaptive mount strategy (brute-force) configuration
+  bool pfs_bruteforce_enabled;
+  uint32_t pfs_bruteforce_sleep_ms;
+  uint32_t pfs_bruteforce_max_attempts;
+  uint32_t pfs_bruteforce_max_seconds_per_image;
+  uint32_t pfs_bruteforce_cooldown_seconds;
+  uint32_t pfs_bruteforce_max_global_attempts_per_scan;
+  char pfs_bruteforce_sector_sizes[64];
+  char pfs_bruteforce_image_types[64];
+  char pfs_bruteforce_raw_flags[64];
+  char pfs_bruteforce_fstypes[64];
+  // Probe mode: collect ALL working mount profiles (disabled by default)
+  bool pfs_probe_enabled;
+  // Benchmark mode: measure read performance for each working profile (disabled by default)
+  bool pfs_bench_enabled;
+  // Minimum bytes to read per benchmark test file (default 65536)
+  uint32_t pfs_bench_min_read_bytes;
+  // Read latency threshold in ms above which a read counts as slow (default 500)
+  uint32_t pfs_bench_delay_threshold_ms;
 } runtime_config_t;
 
 typedef enum {
diff --git a/src/sm_bench.c b/src/sm_bench.c
new file mode 100644
index 0000000..78d4329
--- /dev/null
+++ b/src/sm_bench.c
@@ -0,0 +1,884 @@
+#include "sm_bench.h"
+
+#include <dirent.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <strings.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+#include "sm_limits.h"
+#include "sm_log.h"
+#include "sm_mount_cache.h"
+#include "sm_paths.h"
+#include "sm_time.h"
+
+// ---------------------------------------------------------------------------
+// Internal constants
+
+#define BENCH_INI_PATH "/data/shadowmount/benchmarking.ini"
+#define PROBE_INI_PREFIX "/data/shadowmount/pfs_probe_"
+#define PROBE_INI_SUFFIX ".ini"
+
+#define BENCH_LINE_LEN 768
+#define BENCH_READ_BUF 65536u  // 64 KB read chunk for file bench
+#define BENCH_MAX_DIR_DEPTH 8  // recursion limit for dir walk
+#define BENCH_MAX_READ_BYTES (256u * 1024u * 1024u)  // hard cap 256 MB per file
+
+// Result serialization field separator
+#define RS ","
+
+// ---------------------------------------------------------------------------
+// Helpers
+
+static uint32_t us_to_ms(uint64_t us) {
+    return (uint32_t)(us / 1000u);
+}
+
+static uint64_t elapsed_ms_since(uint64_t start_us) {
+    uint64_t now = monotonic_time_us();
+    if (now <= start_us)
+        return 0;
+    return (now - start_us) / 1000u;
+}
+
+// Write contents atomically via a temp file + rename.
+// buf is not necessarily NUL-terminated up to len.
+static bool atomic_write_file(const char *path, const char *buf, size_t len) {
+    char tmp[MAX_PATH];
+    snprintf(tmp, sizeof(tmp), "%s.tmp", path);
+
+    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
+    if (fd < 0)
+        return false;
+
+    size_t written = 0;
+    while (written < len) {
+        ssize_t w = write(fd, buf + written, len - written);
+        if (w < 0) {
+            close(fd);
+            (void)unlink(tmp);
+            return false;
+        }
+        written += (size_t)w;
+    }
+    close(fd);
+    if (rename(tmp, path) != 0) {
+        (void)unlink(tmp);
+        return false;
+    }
+    return true;
+}
+
+// Read entire file into a malloc'd buffer; caller must free. Returns NULL on
+// failure. *len_out is set to file size.
+static char *slurp_file(const char *path, size_t *len_out) {
+    *len_out = 0;
+    struct stat st;
+    if (stat(path, &st) != 0)
+        return NULL;
+    if (st.st_size <= 0 || st.st_size > 8 * 1024 * 1024)
+        return NULL;  // refuse >8 MB for the bench file to keep memory bounded
+
+    char *buf = malloc((size_t)st.st_size + 1);
+    if (!buf)
+        return NULL;
+
+    int fd = open(path, O_RDONLY);
+    if (fd < 0) {
+        free(buf);
+        return NULL;
+    }
+
+    ssize_t r = read(fd, buf, (size_t)st.st_size);
+    close(fd);
+    if (r < 0) {
+        free(buf);
+        return NULL;
+    }
+    buf[r] = '\0';
+    *len_out = (size_t)r;
+    return buf;
+}
+
+// Trim leading/trailing ASCII whitespace in-place; return pointer into s.
+static char *bench_trim(char *s) {
+    while (*s == ' ' || *s == '\t' || *s == '\r' || *s == '\n')
+        s++;
+    char *e = s + strlen(s);
+    while (e > s && (e[-1] == ' ' || e[-1] == '\t' || e[-1] == '\r' ||
+                     e[-1] == '\n'))
+        e--;
+    *e = '\0';
+    return s;
+}
+
+// Sanitize image_basename for use in a filename: replace '/' and '\\' with '_'.
+static void safe_basename(const char *in, char *out, size_t out_size) {
+    size_t i;
+    for (i = 0; i < out_size - 1 && in[i]; i++) {
+        char c = in[i];
+        out[i] = (c == '/' || c == '\\') ? '_' : c;
+    }
+    out[i] = '\0';
+}
+
+// ---------------------------------------------------------------------------
+// Result serialization
+
+// Format bench_result_t into a single-line string (no newline).
+static void format_bench_result(const bench_result_t *r,
+                                char *buf, size_t buf_size) {
+    snprintf(buf, buf_size,
+             "mount_ok=%u,mount_ms=%u,"
+             "dl_ok=%u,dl_ms=%u,dl_f=%u,dl_d=%u,"
+             "eb_ok=%u,eb_ms=%u,eb_b=%llu,eb_gap=%u,"
+             "pr_ok=%u,pr_ms=%u,pr_b=%llu,pr_gap=%u,"
+             "slow=%u,fail=%u,score=%u",
+             r->mount_ok ? 1u : 0u, r->mount_ms,
+             r->dirlist_ok ? 1u : 0u, r->dirlist_ms,
+             r->dirlist_files, r->dirlist_dirs,
+             r->eboot_ok ? 1u : 0u, r->eboot_read_ms,
+             (unsigned long long)r->eboot_bytes, r->eboot_max_gap_ms,
+             r->param_ok ? 1u : 0u, r->param_read_ms,
+             (unsigned long long)r->param_bytes, r->param_max_gap_ms,
+             r->slow_reads, r->any_failed ? 1u : 0u,
+             r->score_ms);
+}
+
+// Parse a bench_result string (produced by format_bench_result) back into r.
+static bool parse_bench_result(const char *str, bench_result_t *r) {
+    if (!str || !r)
+        return false;
+    memset(r, 0, sizeof(*r));
+
+    char buf[BENCH_LINE_LEN];
+    (void)strlcpy(buf, str, sizeof(buf));
+
+    char *p = buf;
+    char *token, *saveptr = NULL;
+    for (token = strtok_r(p, ",", &saveptr); token;
+         token = strtok_r(NULL, ",", &saveptr)) {
+        char *eq = strchr(token, '=');
+        if (!eq)
+            continue;
+        *eq = '\0';
+        const char *k = bench_trim(token);
+        const char *v = eq + 1;
+        unsigned long long ull;
+        unsigned long ul;
+
+        if (strcmp(k, "mount_ok") == 0) { ul = strtoul(v, NULL, 10); r->mount_ok = (ul != 0); }
+        else if (strcmp(k, "mount_ms") == 0) { r->mount_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "dl_ok") == 0) { ul = strtoul(v, NULL, 10); r->dirlist_ok = (ul != 0); }
+        else if (strcmp(k, "dl_ms") == 0) { r->dirlist_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "dl_f") == 0) { r->dirlist_files = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "dl_d") == 0) { r->dirlist_dirs = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "eb_ok") == 0) { ul = strtoul(v, NULL, 10); r->eboot_ok = (ul != 0); }
+        else if (strcmp(k, "eb_ms") == 0) { r->eboot_read_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "eb_b") == 0) { ull = strtoull(v, NULL, 10); r->eboot_bytes = ull; }
+        else if (strcmp(k, "eb_gap") == 0) { r->eboot_max_gap_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "pr_ok") == 0) { ul = strtoul(v, NULL, 10); r->param_ok = (ul != 0); }
+        else if (strcmp(k, "pr_ms") == 0) { r->param_read_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "pr_b") == 0) { ull = strtoull(v, NULL, 10); r->param_bytes = ull; }
+        else if (strcmp(k, "pr_gap") == 0) { r->param_max_gap_ms = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "slow") == 0) { r->slow_reads = (uint32_t)strtoul(v, NULL, 10); }
+        else if (strcmp(k, "fail") == 0) { ul = strtoul(v, NULL, 10); r->any_failed = (ul != 0); }
+        else if (strcmp(k, "score") == 0) { r->score_ms = (uint32_t)strtoul(v, NULL, 10); }
+    }
+    return true;
+}
+
+// ---------------------------------------------------------------------------
+// Probe persistence
+
+void bench_save_probe(const char *image_basename,
+                      const mount_profile_t *profiles, int count) {
+    if (!image_basename || !profiles || count <= 0)
+        return;
+
+    char safe[MAX_PATH];
+    safe_basename(image_basename, safe, sizeof(safe));
+
+    char path[MAX_PATH];
+    snprintf(path, sizeof(path), "%s%s%s", PROBE_INI_PREFIX, safe, PROBE_INI_SUFFIX);
+
+    // Build file content
+    char *content = malloc(count * 512 + 128);
+    if (!content)
+        return;
+
+    int pos = 0;
+    pos += snprintf(content + pos, (size_t)(count * 512 + 128 - pos),
+                    "# ShadowMount PFS probe results - auto-generated\n"
+                    "profile_count=%d\n", count);
+
+    for (int i = 0; i < count; i++) {
+        char profile_str[256];
+        format_profile_for_cache(&profiles[i], profile_str, sizeof(profile_str));
+        pos += snprintf(content + pos, (size_t)(count * 512 + 128 - pos),
+                        "profile_%d=%s\n", i, profile_str);
+    }
+
+    if (!atomic_write_file(path, content, (size_t)pos))
+        log_debug(" [BENCH] failed to save probe file: %s", path);
+    else
+        log_debug(" [BENCH] saved %d probe profiles to: %s", count, path);
+
+    free(content);
+}
+
+int bench_load_probe(const char *image_basename,
+                     mount_profile_t *profiles_out, int max_count) {
+    if (!image_basename || !profiles_out || max_count <= 0)
+        return 0;
+
+    char safe[MAX_PATH];
+    safe_basename(image_basename, safe, sizeof(safe));
+
+    char path[MAX_PATH];
+    snprintf(path, sizeof(path), "%s%s%s", PROBE_INI_PREFIX, safe, PROBE_INI_SUFFIX);
+
+    size_t len = 0;
+    char *content = slurp_file(path, &len);
+    if (!content)
+        return 0;
+
+    int loaded = 0;
+    char *line, *saveptr = NULL;
+    for (line = strtok_r(content, "\n", &saveptr); line && loaded < max_count;
+         line = strtok_r(NULL, "\n", &saveptr)) {
+        line = bench_trim(line);
+        if (line[0] == '#' || line[0] == '\0')
+            continue;
+
+        // Look for profile_N=
+        const char *prefix = "profile_";
+        if (strncmp(line, prefix, strlen(prefix)) != 0)
+            continue;
+
+        char *eq = strchr(line, '=');
+        if (!eq)
+            continue;
+
+        const char *profile_str = eq + 1;
+        if (parse_profile_from_cache(profile_str, &profiles_out[loaded]))
+            loaded++;
+    }
+
+    free(content);
+    log_debug(" [BENCH] loaded %d probe profiles from: %s", loaded, path);
+    return loaded;
+}
+
+// ---------------------------------------------------------------------------
+// Benchmark execution
+
+// Recursive directory walk; increments *files and *dirs.
+// depth_remaining prevents infinite recursion.
+static void walk_dir(const char *path, uint32_t *files, uint32_t *dirs,
+                     int depth_remaining) {
+    if (depth_remaining <= 0)
+        return;
+
+    DIR *d = opendir(path);
+    if (!d)
+        return;
+
+    struct dirent *ent;
+    while ((ent = readdir(d)) != NULL) {
+        if (ent->d_name[0] == '.')
+            continue;
+
+        char child[MAX_PATH];
+        snprintf(child, sizeof(child), "%s/%s", path, ent->d_name);
+
+        bool is_dir = false;
+        if (ent->d_type == DT_DIR) {
+            is_dir = true;
+        } else if (ent->d_type == DT_UNKNOWN) {
+            struct stat st;
+            if (stat(child, &st) == 0)
+                is_dir = S_ISDIR(st.st_mode);
+        }
+
+        if (is_dir) {
+            (*dirs)++;
+            walk_dir(child, files, dirs, depth_remaining - 1);
+        } else {
+            (*files)++;
+        }
+    }
+    closedir(d);
+}
+
+// Search for a file by name (case-insensitive) under root, depth-limited.
+// Writes found path into out (size out_size). Returns true if found.
+static bool find_file_ci(const char *root, const char *name, + int depth_remaining, char *out, size_t out_size) { + if (depth_remaining <= 0) + return false; + + DIR *d = opendir(root); + if (!d) + return false; + + bool found = false; + struct dirent *ent; + while (!found && (ent = readdir(d)) != NULL) { + if (ent->d_name[0] == '.') + continue; + + char child[MAX_PATH]; + snprintf(child, sizeof(child), "%s/%s", root, ent->d_name); + + bool is_dir = false; + if (ent->d_type == DT_DIR) { + is_dir = true; + } else if (ent->d_type == DT_UNKNOWN) { + struct stat st; + if (stat(child, &st) == 0) + is_dir = S_ISDIR(st.st_mode); + } + + if (!is_dir && strcasecmp(ent->d_name, name) == 0) { + (void)strlcpy(out, child, out_size); + found = true; + } else if (is_dir) { + found = find_file_ci(child, name, depth_remaining - 1, out, out_size); + } + } + closedir(d); + return found; +} + +// Read a file measuring timing and stall detection. +// Returns bytes read; fills *total_ms, *max_gap_ms, *slow_reads. 
+static uint64_t bench_read_file(const char *path, + uint32_t min_read_bytes, + uint32_t delay_threshold_ms, + uint32_t *total_ms_out, + uint32_t *max_gap_ms_out, + uint32_t *slow_reads_out) { + *total_ms_out = 0; + *max_gap_ms_out = 0; + *slow_reads_out = 0; + + int fd = open(path, O_RDONLY); + if (fd < 0) + return 0; + + char *buf = malloc(BENCH_READ_BUF); + if (!buf) { + close(fd); + return 0; + } + + uint64_t total_bytes = 0; + uint32_t max_cap = min_read_bytes; + if (max_cap < BENCH_READ_BUF) + max_cap = BENCH_READ_BUF; + if (max_cap > BENCH_MAX_READ_BYTES) + max_cap = BENCH_MAX_READ_BYTES; + + uint64_t start_us = monotonic_time_us(); + uint64_t prev_us = start_us; + + while (total_bytes < (uint64_t)max_cap) { + uint64_t chunk_start = monotonic_time_us(); + ssize_t r = read(fd, buf, BENCH_READ_BUF); + if (r <= 0) + break; + total_bytes += (uint64_t)r; + + uint64_t chunk_end = monotonic_time_us(); + uint32_t chunk_ms = us_to_ms(chunk_end - chunk_start); + + // Gap between end of previous read and start of this one + uint32_t gap_ms = us_to_ms(chunk_start - prev_us); + prev_us = chunk_end; + + if (gap_ms > *max_gap_ms_out) + *max_gap_ms_out = gap_ms; + if (chunk_ms > delay_threshold_ms || gap_ms > delay_threshold_ms) + (*slow_reads_out)++; + } + + uint64_t end_us = monotonic_time_us(); + *total_ms_out = us_to_ms(end_us - start_us); + + free(buf); + close(fd); + (void)elapsed_ms_since; // suppress unused warning + return total_bytes; +} + +bool bench_run_mounted(const char *mount_point, + const runtime_config_t *cfg, + bench_result_t *result_out) { + if (!mount_point || !cfg || !result_out) + return false; + + uint32_t min_read = cfg->pfs_bench_min_read_bytes; + if (min_read < BENCH_READ_BUF) + min_read = BENCH_READ_BUF; + + uint32_t delay_thresh = cfg->pfs_bench_delay_threshold_ms; + if (delay_thresh == 0) + delay_thresh = 500u; + + result_out->mount_ok = true; // caller guarantees this + + // --- Directory walk --- + uint64_t dl_start = monotonic_time_us(); + 
result_out->dirlist_files = 0; + result_out->dirlist_dirs = 0; + walk_dir(mount_point, &result_out->dirlist_files, + &result_out->dirlist_dirs, BENCH_MAX_DIR_DEPTH); + result_out->dirlist_ms = us_to_ms(monotonic_time_us() - dl_start); + result_out->dirlist_ok = (result_out->dirlist_files > 0 || + result_out->dirlist_dirs > 0); + + log_debug(" [BENCH] dirlist: ok=%d ms=%u files=%u dirs=%u", + result_out->dirlist_ok ? 1 : 0, + result_out->dirlist_ms, + result_out->dirlist_files, + result_out->dirlist_dirs); + + // --- eboot.bin read --- + char eboot_path[MAX_PATH]; + eboot_path[0] = '\0'; + (void)find_file_ci(mount_point, "eboot.bin", 4, eboot_path, sizeof(eboot_path)); + + if (eboot_path[0] != '\0') { + result_out->eboot_bytes = bench_read_file( + eboot_path, min_read, delay_thresh, + &result_out->eboot_read_ms, + &result_out->eboot_max_gap_ms, + &result_out->slow_reads); + result_out->eboot_ok = (result_out->eboot_bytes > 0); + log_debug(" [BENCH] eboot.bin: ok=%d ms=%u bytes=%llu max_gap=%u slow=%u", + result_out->eboot_ok ? 1 : 0, + result_out->eboot_read_ms, + (unsigned long long)result_out->eboot_bytes, + result_out->eboot_max_gap_ms, + result_out->slow_reads); + } else { + log_debug(" [BENCH] eboot.bin: not found under %s", mount_point); + } + + // --- param.json read (sce_sys/param.json) --- + char param_path[MAX_PATH]; + param_path[0] = '\0'; + (void)find_file_ci(mount_point, "param.json", 4, param_path, sizeof(param_path)); + + uint32_t param_slow = 0; + if (param_path[0] != '\0') { + result_out->param_bytes = bench_read_file( + param_path, min_read, delay_thresh, + &result_out->param_read_ms, + &result_out->param_max_gap_ms, + &param_slow); + result_out->slow_reads += param_slow; + result_out->param_ok = (result_out->param_bytes > 0); + log_debug(" [BENCH] param.json: ok=%d ms=%u bytes=%llu max_gap=%u", + result_out->param_ok ? 
1 : 0, + result_out->param_read_ms, + (unsigned long long)result_out->param_bytes, + result_out->param_max_gap_ms); + } else { + log_debug(" [BENCH] param.json: not found under %s", mount_point); + } + + result_out->any_failed = (!result_out->dirlist_ok && !result_out->eboot_ok); + + // Compute composite score: sum of key latencies weighted toward actual reads. + // Failures add a large penalty so they sort to the end. + if (result_out->dirlist_ok || result_out->eboot_ok) { + uint32_t score = result_out->mount_ms + + result_out->dirlist_ms + + result_out->eboot_read_ms + + result_out->param_read_ms + + result_out->eboot_max_gap_ms * 2u + + result_out->param_max_gap_ms * 2u + + result_out->slow_reads * 500u; + result_out->score_ms = (score == 0) ? 1u : score; + } else { + result_out->score_ms = 0; // indicates failure + } + + return result_out->dirlist_ok || result_out->eboot_ok; +} + +// --------------------------------------------------------------------------- +// Benchmark persistence + +// A parsed benchmarking.ini is stored in memory as a flat list of sections. +// Each section has a name (the image basename) and a list of key=value pairs. +// Since we rewrite the whole file on each save, we keep it simple. + +// Build the path to benchmarking.ini +static const char *bench_ini_path(void) { + return BENCH_INI_PATH; +} + +// Parse the INI into a malloc'd buffer of lines, grouped by section. +// Returns the raw file content; caller frees. Out-parameters point into it. +// We locate the section for image_basename and find specific keys. 
+ +typedef struct { + char key[64]; + char value[BENCH_LINE_LEN]; +} bench_kv_t; + +typedef struct { + char name[MAX_PATH]; // section name (image basename) + bench_kv_t *kvs; // dynamically allocated + int kv_count; + int kv_cap; +} bench_section_t; + +#define BENCH_MAX_SECTIONS 64 +#define BENCH_MAX_KVS_PER_SECTION 256 + +static void bench_section_free(bench_section_t *s) { + free(s->kvs); + s->kvs = NULL; + s->kv_count = 0; + s->kv_cap = 0; +} + +static bool bench_section_add_kv(bench_section_t *s, + const char *k, const char *v) { + if (s->kv_count >= BENCH_MAX_KVS_PER_SECTION) + return false; + + if (s->kv_count >= s->kv_cap) { + int new_cap = s->kv_cap ? s->kv_cap * 2 : 16; + if (new_cap > BENCH_MAX_KVS_PER_SECTION) + new_cap = BENCH_MAX_KVS_PER_SECTION; + bench_kv_t *nk = realloc(s->kvs, (size_t)new_cap * sizeof(bench_kv_t)); + if (!nk) + return false; + s->kvs = nk; + s->kv_cap = new_cap; + } + + (void)strlcpy(s->kvs[s->kv_count].key, k, sizeof(s->kvs[0].key)); + (void)strlcpy(s->kvs[s->kv_count].value, v, sizeof(s->kvs[0].value)); + s->kv_count++; + return true; +} + +static const char *bench_section_get(const bench_section_t *s, const char *k) { + for (int i = 0; i < s->kv_count; i++) { + if (strcasecmp(s->kvs[i].key, k) == 0) + return s->kvs[i].value; + } + return NULL; +} + +static void bench_section_set(bench_section_t *s, const char *k, const char *v) { + for (int i = 0; i < s->kv_count; i++) { + if (strcasecmp(s->kvs[i].key, k) == 0) { + (void)strlcpy(s->kvs[i].value, v, sizeof(s->kvs[i].value)); + return; + } + } + bench_section_add_kv(s, k, v); +} + +// Parse benchmarking.ini into sections array. +// sections must be BENCH_MAX_SECTIONS elements; returns number of sections. 
+static int bench_parse_ini(const char *content, bench_section_t *sections, + int max_sections) { + if (!content || !sections || max_sections <= 0) + return 0; + + char *buf = strdup(content); + if (!buf) + return 0; + + int count = 0; + int cur = -1; // current section index + + char *line, *saveptr = NULL; + for (line = strtok_r(buf, "\n", &saveptr); line; + line = strtok_r(NULL, "\n", &saveptr)) { + line = bench_trim(line); + if (line[0] == '#' || line[0] == ';' || line[0] == '\0') + continue; + + if (line[0] == '[') { + // Section header + char *end = strchr(line, ']'); + if (!end) + continue; + *end = '\0'; + const char *name = line + 1; + + if (count >= max_sections) { + cur = -1; + continue; + } + memset(&sections[count], 0, sizeof(sections[0])); + (void)strlcpy(sections[count].name, name, sizeof(sections[0].name)); + cur = count; + count++; + continue; + } + + if (cur < 0) + continue; + + char *eq = strchr(line, '='); + if (!eq) + continue; + *eq = '\0'; + char *k = bench_trim(line); + char *v = bench_trim(eq + 1); + bench_section_add_kv(&sections[cur], k, v); + } + + free(buf); + return count; +} + +// Serialize sections back to a buffer. Returns malloc'd string; caller frees. 
+static char *bench_serialize_ini(const bench_section_t *sections, int count) { + // Estimate size + size_t sz = 128; + for (int i = 0; i < count; i++) { + sz += 4 + strlen(sections[i].name) + 4; + for (int j = 0; j < sections[i].kv_count; j++) + sz += strlen(sections[i].kvs[j].key) + 3 + + strlen(sections[i].kvs[j].value) + 2; + } + + char *buf = malloc(sz); + if (!buf) + return NULL; + + int pos = 0; + pos += snprintf(buf + pos, sz - (size_t)pos, + "# ShadowMount PFS bench state - auto-generated\n"); + + for (int i = 0; i < count; i++) { + pos += snprintf(buf + pos, sz - (size_t)pos, + "\n[%s]\n", sections[i].name); + for (int j = 0; j < sections[i].kv_count; j++) { + pos += snprintf(buf + pos, sz - (size_t)pos, + "%s=%s\n", + sections[i].kvs[j].key, + sections[i].kvs[j].value); + } + } + return buf; +} + +// Find section index for image_basename, or -1. +static int bench_find_section(const bench_section_t *sections, int count, + const char *name) { + for (int i = 0; i < count; i++) { + if (strcasecmp(sections[i].name, name) == 0) + return i; + } + return -1; +} + +int bench_load_results(const char *image_basename, + bench_result_t *results_out, int max_count, + int *next_to_bench_out, bool *bench_complete_out) { + if (next_to_bench_out) + *next_to_bench_out = 0; + if (bench_complete_out) + *bench_complete_out = false; + if (!image_basename || !results_out || max_count <= 0) + return 0; + + size_t len = 0; + char *content = slurp_file(bench_ini_path(), &len); + if (!content) + return 0; + + bench_section_t sections[BENCH_MAX_SECTIONS]; + memset(sections, 0, sizeof(sections)); + int nsec = bench_parse_ini(content, sections, BENCH_MAX_SECTIONS); + free(content); + + int sec_idx = bench_find_section(sections, nsec, image_basename); + if (sec_idx < 0) { + for (int i = 0; i < nsec; i++) + bench_section_free(&sections[i]); + return 0; + } + + bench_section_t *sec = &sections[sec_idx]; + + const char *v; + + if (next_to_bench_out) { + v = bench_section_get(sec, "bench_next"); + 
if (v) + *next_to_bench_out = (int)strtol(v, NULL, 10); + } + if (bench_complete_out) { + v = bench_section_get(sec, "bench_done"); + if (v) + *bench_complete_out = (strtol(v, NULL, 10) != 0); + } + + // Load results + int loaded = 0; + for (int i = 0; i < max_count; i++) { + char key[32]; + snprintf(key, sizeof(key), "result_%d", i); + v = bench_section_get(sec, key); + if (!v) + break; + if (parse_bench_result(v, &results_out[loaded])) { + // Load the profile from profile_N + char pk[32]; + snprintf(pk, sizeof(pk), "profile_%d", i); + const char *pv = bench_section_get(sec, pk); + if (pv) + (void)parse_profile_from_cache(pv, &results_out[loaded].profile); + loaded++; + } + } + + for (int i = 0; i < nsec; i++) + bench_section_free(&sections[i]); + + return loaded; +} + +bool bench_save_result(const char *image_basename, + int profile_idx, const bench_result_t *result, + int total_count, bool bench_complete, int best_idx) { + if (!image_basename || !result) + return false; + + // Load existing content + size_t len = 0; + char *content = slurp_file(bench_ini_path(), &len); + + bench_section_t sections[BENCH_MAX_SECTIONS]; + memset(sections, 0, sizeof(sections)); + int nsec = 0; + + if (content) { + nsec = bench_parse_ini(content, sections, BENCH_MAX_SECTIONS); + free(content); + content = NULL; + } + + // Find or create section for this image + int sec_idx = bench_find_section(sections, nsec, image_basename); + if (sec_idx < 0) { + if (nsec >= BENCH_MAX_SECTIONS) { + for (int i = 0; i < nsec; i++) + bench_section_free(&sections[i]); + return false; + } + memset(&sections[nsec], 0, sizeof(sections[0])); + (void)strlcpy(sections[nsec].name, image_basename, + sizeof(sections[0].name)); + sec_idx = nsec; + nsec++; + } + + bench_section_t *sec = &sections[sec_idx]; + + // Store result + char result_str[BENCH_LINE_LEN]; + format_bench_result(result, result_str, sizeof(result_str)); + char key[32]; + snprintf(key, sizeof(key), "result_%d", profile_idx); + bench_section_set(sec, key, 
result_str); + + // Store corresponding profile + char pk[32]; + snprintf(pk, sizeof(pk), "profile_%d", profile_idx); + char profile_str[256]; + format_profile_for_cache(&result->profile, profile_str, sizeof(profile_str)); + bench_section_set(sec, pk, profile_str); + + // Update meta-keys + char tmp[32]; + snprintf(tmp, sizeof(tmp), "%d", total_count); + bench_section_set(sec, "bench_total", tmp); + snprintf(tmp, sizeof(tmp), "%d", bench_complete ? total_count : profile_idx + 1); + bench_section_set(sec, "bench_next", tmp); + snprintf(tmp, sizeof(tmp), "%d", bench_complete ? 1 : 0); + bench_section_set(sec, "bench_done", tmp); + snprintf(tmp, sizeof(tmp), "%d", best_idx); + bench_section_set(sec, "best", tmp); + + // Serialize and write + char *new_content = bench_serialize_ini(sections, nsec); + bool ok = false; + if (new_content) { + ok = atomic_write_file(bench_ini_path(), new_content, strlen(new_content)); + free(new_content); + } + + for (int i = 0; i < nsec; i++) + bench_section_free(&sections[i]); + + return ok; +} + +// --------------------------------------------------------------------------- +// Analysis + +int bench_find_best(const bench_result_t *results, int count) { + int best = -1; + uint32_t best_score = 0; + + for (int i = 0; i < count; i++) { + if (!results[i].mount_ok || results[i].score_ms == 0) + continue; + if (best < 0 || results[i].score_ms < best_score) { + best = i; + best_score = results[i].score_ms; + } + } + return best; +} + +void bench_log_report(const char *image_basename, + const bench_result_t *results, int count, int best_idx) { + if (!image_basename || !results || count <= 0) + return; + + log_debug(" [BENCH] ===== report for %s =====", image_basename); + log_debug(" [BENCH] profiles tested: %d best_idx: %d", count, best_idx); + + for (int i = 0; i < count; i++) { + const bench_result_t *r = &results[i]; + char profile_buf[160]; + mount_profile_format_compact(&r->profile, profile_buf, sizeof(profile_buf)); + log_debug( + " [BENCH] 
[%d%s] profile=(%s) score=%u mount_ms=%u " + "dirlist_ms=%u(f=%u,d=%u) eboot_ms=%u(%lluB,gap=%u) " + "param_ms=%u(%lluB,gap=%u) slow=%u fail=%d", + i, (i == best_idx) ? "*" : " ", + profile_buf, r->score_ms, r->mount_ms, + r->dirlist_ms, r->dirlist_files, r->dirlist_dirs, + r->eboot_read_ms, (unsigned long long)r->eboot_bytes, r->eboot_max_gap_ms, + r->param_read_ms, (unsigned long long)r->param_bytes, r->param_max_gap_ms, + r->slow_reads, r->any_failed ? 1 : 0); + } + + if (best_idx >= 0 && best_idx < count) { + char best_prof[160]; + mount_profile_format_compact(&results[best_idx].profile, best_prof, sizeof(best_prof)); + log_debug(" [BENCH] best profile for %s: (%s) score=%u", + image_basename, best_prof, results[best_idx].score_ms); + } else { + log_debug(" [BENCH] no valid profiles found for %s", image_basename); + } +} diff --git a/src/sm_brute_force.c b/src/sm_brute_force.c new file mode 100644 index 0000000..8b025c9 --- /dev/null +++ b/src/sm_brute_force.c @@ -0,0 +1,215 @@ +#include "sm_brute_force.h" + +#include +#include +#include +#include + +#include "sm_log.h" +#include "sm_mount_defs.h" + +// Stage A: Fast-track candidates (high probability) +static const uint16_t STAGE_A_IMAGE_TYPES[] = {0, 5, 7}; +static const uint16_t STAGE_A_RAW_FLAGS[] = {0x9, 0x8}; +static const uint32_t STAGE_A_SECTOR_SIZES[] = {4096, 32768}; +static const char *STAGE_A_FSTYPES[] = {"pfs", "ppr_pfs"}; + +// Stage B: Expanded candidates (when Stage A fails) +static const uint16_t STAGE_B_IMAGE_TYPES[] = {1, 2, 3, 4, 6, 8, 9, 10, 11, 12, 0xA, 0xB, 0xC}; +static const uint16_t STAGE_B_RAW_FLAGS[] = {0xD, 0xC}; +static const uint32_t STAGE_B_SECTOR_SIZES[] = {65536, 16384, 8192}; +static const char *STAGE_B_FSTYPES[] = {"transaction_pfs"}; + +#define ARRAY_COUNT(arr) (sizeof(arr) / sizeof(arr[0])) + +void brute_attempt_state_init(brute_attempt_state_t *state, + uint32_t max_attempts, + uint32_t max_seconds_per_image) { + if (!state) + return; + memset(state, 0, sizeof(*state)); + 
state->max_attempts = max_attempts > 0 ? max_attempts : 20; + state->max_seconds = max_seconds_per_image > 0 ? max_seconds_per_image : 60; + state->start_time = time(NULL); + state->should_stop = false; +} + +bool brute_should_continue(const brute_attempt_state_t *state) { + if (!state || state->should_stop) + return false; + + if (state->total_attempts >= state->max_attempts) { + log_debug(" [IMG][BRUTE] reached max attempts limit (%u)", state->max_attempts); + return false; + } + + time_t now = time(NULL); + uint32_t elapsed = (uint32_t)(now - state->start_time); + if (elapsed >= state->max_seconds) { + log_debug(" [IMG][BRUTE] reached max time limit (%u seconds)", state->max_seconds); + return false; + } + + return true; +} + +bool brute_record_attempt(brute_attempt_state_t *state, + brute_force_result_t result) { + if (!state) + return false; + + (void)result; + + state->total_attempts++; + return brute_should_continue(state); +} + +int brute_generate_pfs_candidates(const char *image_path, + image_fs_type_t fs_type, + bool mount_read_only, + int stage, + mount_profile_t *candidates, + int max_count) { + if (!image_path || !candidates || max_count <= 0) + return 0; + + (void)fs_type; + + const uint16_t *image_types = NULL; + size_t image_types_count = 0; + const uint16_t *raw_flags = NULL; + size_t raw_flags_count = 0; + const uint32_t *sector_sizes = NULL; + size_t sector_sizes_count = 0; + const char **fstypes = NULL; + size_t fstypes_count = 0; + + if (stage == 0) { + // Stage A: Fast track + image_types = STAGE_A_IMAGE_TYPES; + image_types_count = ARRAY_COUNT(STAGE_A_IMAGE_TYPES); + raw_flags = STAGE_A_RAW_FLAGS; + raw_flags_count = ARRAY_COUNT(STAGE_A_RAW_FLAGS); + sector_sizes = STAGE_A_SECTOR_SIZES; + sector_sizes_count = ARRAY_COUNT(STAGE_A_SECTOR_SIZES); + fstypes = STAGE_A_FSTYPES; + fstypes_count = ARRAY_COUNT(STAGE_A_FSTYPES); + } else { + // Stage B: Expanded set + image_types = STAGE_B_IMAGE_TYPES; + image_types_count = 
ARRAY_COUNT(STAGE_B_IMAGE_TYPES); + raw_flags = STAGE_B_RAW_FLAGS; + raw_flags_count = ARRAY_COUNT(STAGE_B_RAW_FLAGS); + sector_sizes = STAGE_B_SECTOR_SIZES; + sector_sizes_count = ARRAY_COUNT(STAGE_B_SECTOR_SIZES); + fstypes = STAGE_B_FSTYPES; + fstypes_count = ARRAY_COUNT(STAGE_B_FSTYPES); + } + + int candidate_count = 0; + + // Generate all combinations in priority order + for (size_t i_img = 0; i_img < image_types_count && candidate_count < max_count; i_img++) { + for (size_t i_raw = 0; i_raw < raw_flags_count && candidate_count < max_count; i_raw++) { + for (size_t i_sec = 0; i_sec < sector_sizes_count && candidate_count < max_count; i_sec++) { + for (size_t i_fs = 0; i_fs < fstypes_count && candidate_count < max_count; i_fs++) { + mount_profile_t *profile = &candidates[candidate_count]; + memset(profile, 0, sizeof(*profile)); + + profile->io_version = LVD_ATTACH_IO_VERSION_V0; + profile->image_type = image_types[i_img]; + profile->raw_flags = raw_flags[i_raw]; + profile->sector_size = sector_sizes[i_sec]; + profile->secondary_unit = sector_sizes[i_sec]; + profile->fstype = fstypes[i_fs]; + profile->budgetid = DEVPFS_BUDGET_GAME; + profile->mkeymode = DEVPFS_MKEYMODE_SD; + profile->sigverify = (PFS_MOUNT_SIGVERIFY != 0) ? 1u : 0u; + profile->playgo = (PFS_MOUNT_PLAYGO != 0) ? 1u : 0u; + profile->disc = (PFS_MOUNT_DISC != 0) ? 
1u : 0u; + profile->mount_read_only = mount_read_only; + + // Normalize raw_flags + if ((profile->raw_flags & 0x800Eu) != 0u) { + uint32_t raw = (uint32_t)profile->raw_flags; + uint32_t len = (raw & 0xFFFF8000u) + ((raw & 2u) << 6) + + (8u * (raw & 1u)) + (2u * ((raw >> 2) & 1u)) + + (2u * (raw & 8u)) + 4u; + profile->normalized_flags = (uint16_t)len; + } else { + profile->normalized_flags = (uint16_t)(8u * ((uint32_t)profile->raw_flags & 1u) + 4u); + } + + profile->label = NULL; + + candidate_count++; + } + } + } + } + + return candidate_count; +} + +void brute_log_attempt(const char *image_path, + uint32_t attempt_index, + uint32_t total_attempts, + const mount_profile_t *profile, + brute_force_result_t result, + int errno_value) { + if (!image_path || !profile) + return; + + const char *result_str = "UNKNOWN"; + switch (result) { + case BRUTE_RESULT_SUCCESS: + result_str = "OK"; + break; + case BRUTE_RESULT_ATTACH_FAILED: + result_str = "ATTACH_FAILED"; + break; + case BRUTE_RESULT_NMOUNT_FAILED: + result_str = "NMOUNT_FAILED"; + break; + case BRUTE_RESULT_VALIDATION_FAILED: + result_str = "VALIDATION_FAILED"; + break; + case BRUTE_RESULT_CLEANUP_FAILED: + result_str = "CLEANUP_FAILED"; + break; + case BRUTE_RESULT_TIMEOUT: + result_str = "TIMEOUT"; + break; + case BRUTE_RESULT_ABORT: + result_str = "ABORT"; + break; + } + + char profile_buf[160]; + mount_profile_format_compact(profile, profile_buf, sizeof(profile_buf)); + + log_debug( + " [IMG][BRUTE] attempt=%u/%u result=%s errno=%d profile=(img=%u " + "raw=0x%x flags=0x%x sec=%u fstype=%s)", + attempt_index, total_attempts, result_str, errno_value, profile->image_type, + profile->raw_flags, profile->normalized_flags, profile->sector_size, + profile->fstype); +} + +void brute_log_success(const char *image_path, + const mount_profile_t *profile) { + if (!image_path || !profile) + return; + + char profile_buf[160]; + mount_profile_format_compact(profile, profile_buf, sizeof(profile_buf)); + + log_debug(" 
[IMG][BRUTE] profile selected and cached: %s", profile_buf); +} + +void brute_log_exhausted(const char *image_path) { + if (!image_path) + return; + + log_debug(" [IMG][BRUTE] all profiles failed, moving to next image"); +} diff --git a/src/sm_config_mount.c b/src/sm_config_mount.c index 1909f23..e1c50f8 100644 --- a/src/sm_config_mount.c +++ b/src/sm_config_mount.c @@ -257,6 +257,27 @@ static void init_runtime_config_defaults(runtime_config_state_t *state) { state->cfg.md_sector_exfat = MD_SECTOR_SIZE_EXFAT; state->cfg.md_sector_ufs = MD_SECTOR_SIZE_UFS; state->cfg.md_sector_zfs = MD_SECTOR_SIZE_ZFS; + // Adaptive mount strategy (brute-force) defaults + state->cfg.pfs_bruteforce_enabled = true; + state->cfg.pfs_bruteforce_sleep_ms = 200u; + state->cfg.pfs_bruteforce_max_attempts = 20u; + state->cfg.pfs_bruteforce_max_seconds_per_image = 60u; + state->cfg.pfs_bruteforce_cooldown_seconds = 300u; + state->cfg.pfs_bruteforce_max_global_attempts_per_scan = 200u; + (void)strlcpy(state->cfg.pfs_bruteforce_sector_sizes, "4096", + sizeof(state->cfg.pfs_bruteforce_sector_sizes)); + (void)strlcpy(state->cfg.pfs_bruteforce_image_types, "0,5,2,3,4,6,1,7", + sizeof(state->cfg.pfs_bruteforce_image_types)); + (void)strlcpy(state->cfg.pfs_bruteforce_raw_flags, "0x9,0x8,0xD,0xC", + sizeof(state->cfg.pfs_bruteforce_raw_flags)); + (void)strlcpy(state->cfg.pfs_bruteforce_fstypes, + "pfs,ppr_pfs,transaction_pfs", + sizeof(state->cfg.pfs_bruteforce_fstypes)); + // Probe + benchmark defaults (both disabled; no overhead unless opted in) + state->cfg.pfs_probe_enabled = false; + state->cfg.pfs_bench_enabled = false; + state->cfg.pfs_bench_min_read_bytes = 65536u; + state->cfg.pfs_bench_delay_threshold_ms = 500u; memset(state->image_mode_rules, 0, sizeof(state->image_mode_rules)); clear_kstuff_title_rules(state); init_runtime_scan_paths_defaults(state); @@ -1306,6 +1327,123 @@ static config_load_status_t load_runtime_config_state(runtime_config_state_t *st continue; } + // Brute-force 
configuration keys + if (strcasecmp(key, "pfs_bruteforce_enabled") == 0) { + if (!parse_bool_ini(value, &bval)) { + log_debug(" [CFG] invalid bool at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_bruteforce_enabled = bval; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_sleep_ms") == 0) { + if (!parse_u32_ini(value, &u32)) { + log_debug(" [CFG] invalid uint32 at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_bruteforce_sleep_ms = u32; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_max_attempts") == 0) { + if (!parse_u32_ini(value, &u32) || u32 == 0) { + log_debug(" [CFG] invalid attempts at line %d: %s=%s (must be > 0)", line_no, key, value); + continue; + } + state->cfg.pfs_bruteforce_max_attempts = u32; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_max_seconds_per_image") == 0) { + if (!parse_u32_ini(value, &u32) || u32 == 0) { + log_debug(" [CFG] invalid timeout at line %d: %s=%s (must be > 0)", line_no, key, value); + continue; + } + state->cfg.pfs_bruteforce_max_seconds_per_image = u32; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_cooldown_seconds") == 0) { + if (!parse_u32_ini(value, &u32)) { + log_debug(" [CFG] invalid cooldown at line %d: %s=%s", line_no, key, + value); + continue; + } + state->cfg.pfs_bruteforce_cooldown_seconds = u32; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_max_global_attempts") == 0) { + if (!parse_u32_ini(value, &u32) || u32 == 0) { + log_debug(" [CFG] invalid global attempts at line %d: %s=%s", + line_no, key, value); + continue; + } + state->cfg.pfs_bruteforce_max_global_attempts_per_scan = u32; + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_sector_sizes") == 0) { + (void)strlcpy(state->cfg.pfs_bruteforce_sector_sizes, value, + sizeof(state->cfg.pfs_bruteforce_sector_sizes)); + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_image_types") == 0) { + (void)strlcpy(state->cfg.pfs_bruteforce_image_types, value, + 
sizeof(state->cfg.pfs_bruteforce_image_types)); + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_raw_flags") == 0) { + (void)strlcpy(state->cfg.pfs_bruteforce_raw_flags, value, + sizeof(state->cfg.pfs_bruteforce_raw_flags)); + continue; + } + + if (strcasecmp(key, "pfs_bruteforce_fstypes") == 0) { + (void)strlcpy(state->cfg.pfs_bruteforce_fstypes, value, + sizeof(state->cfg.pfs_bruteforce_fstypes)); + continue; + } + + if (strcasecmp(key, "pfs_probe_enabled") == 0) { + if (!parse_bool_ini(value, &bval)) { + log_debug(" [CFG] invalid bool at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_probe_enabled = bval; + continue; + } + + if (strcasecmp(key, "pfs_bench_enabled") == 0) { + if (!parse_bool_ini(value, &bval)) { + log_debug(" [CFG] invalid bool at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_bench_enabled = bval; + continue; + } + + if (strcasecmp(key, "pfs_bench_min_read_bytes") == 0) { + if (!parse_u32_ini(value, &u32)) { + log_debug(" [CFG] invalid uint32 at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_bench_min_read_bytes = u32; + continue; + } + + if (strcasecmp(key, "pfs_bench_delay_threshold_ms") == 0) { + if (!parse_u32_ini(value, &u32)) { + log_debug(" [CFG] invalid uint32 at line %d: %s=%s", line_no, key, value); + continue; + } + state->cfg.pfs_bench_delay_threshold_ms = u32; + continue; + } + bool is_sector_key = (strcasecmp(key, "lvd_exfat_sector_size") == 0) || (strcasecmp(key, "lvd_ufs_sector_size") == 0) || @@ -1377,7 +1515,10 @@ static config_load_status_t load_runtime_config_state(runtime_config_state_t *st "exfat_backend=%s ufs_backend=%s zfs_backend=%s " "lvd_sec(exfat=%u ufs=%u zfs=%u pfs=%u) md_sec(exfat=%u ufs=%u zfs=%u) " "scan_interval_s=%u stability_wait_s=%u scan_paths=%d image_rules=%d " - "kstuff_no_pause=%d kstuff_delay_rules=%d", + "kstuff_no_pause=%d kstuff_delay_rules=%d " + "pfs_bruteforce_enabled=%d sleep_ms=%u max_attempts=%u max_s=%u " + 
"cooldown_s=%u max_global=%u " + "pfs_probe=%d pfs_bench=%d bench_min_read=%u bench_delay_ms=%u", state->cfg.debug_enabled ? 1 : 0, state->cfg.quiet_mode ? 1 : 0, state->cfg.mount_read_only ? 1 : 0, state->cfg.force_mount ? 1 : 0, state->cfg.scan_depth, @@ -1396,7 +1537,17 @@ static config_load_status_t load_runtime_config_state(runtime_config_state_t *st state->cfg.md_sector_zfs, state->cfg.scan_interval_us / 1000000u, state->cfg.stability_wait_seconds, state->scan_path_count, image_rule_count, state->kstuff_no_pause_title_count, - kstuff_delay_rule_count); + kstuff_delay_rule_count, + state->cfg.pfs_bruteforce_enabled ? 1 : 0, + state->cfg.pfs_bruteforce_sleep_ms, + state->cfg.pfs_bruteforce_max_attempts, + state->cfg.pfs_bruteforce_max_seconds_per_image, + state->cfg.pfs_bruteforce_cooldown_seconds, + state->cfg.pfs_bruteforce_max_global_attempts_per_scan, + state->cfg.pfs_probe_enabled ? 1 : 0, + state->cfg.pfs_bench_enabled ? 1 : 0, + state->cfg.pfs_bench_min_read_bytes, + state->cfg.pfs_bench_delay_threshold_ms); return CONFIG_LOAD_OK; } diff --git a/src/sm_image.c b/src/sm_image.c index 83bd8fe..4a2204a 100644 --- a/src/sm_image.c +++ b/src/sm_image.c @@ -13,6 +13,11 @@ #include "sm_path_state.h" #include "sm_path_utils.h" #include "sm_paths.h" +#include "sm_mount_profile.h" +#include "sm_brute_force.h" +#include "sm_mount_cache.h" +#include "sm_bench.h" +#include "sm_time.h" static uint32_t get_lvd_sector_size_fallback(image_fs_type_t fs_type) { const runtime_config_t *cfg = runtime_config(); @@ -757,6 +762,1024 @@ static bool validate_mounted_image(const char *file_path, image_fs_type_t fs_typ return true; } +// --- Brute-Force Mount Strategy (PFS two-stage solver) --- +typedef struct { + uint16_t image_type; + uint16_t raw_flags; + uint16_t normalized_flags; + uint32_t sector_size; + uint32_t secondary_unit; +} pfs_attach_tuple_t; + +typedef struct { + const char *fstype; + const char *budgetid; + const char *mkeymode; + uint8_t sigverify; + uint8_t 
playgo; + uint8_t disc; + bool include_ekpfs; + bool supports_noatime; + uint8_t key_level; +} pfs_nmount_profile_t; + +typedef struct { + const uint16_t *image_types; + size_t image_type_count; + const uint16_t *raw_flags; + size_t raw_flag_count; + const uint32_t *sector_sizes; + size_t sector_size_count; + bool force_secondary_65536; + const char *label; +} pfs_attach_pass_t; + +typedef struct { + const runtime_config_t *cfg; + const char *file_path; + image_fs_type_t fs_type; + off_t file_size; + const char *mount_point; + bool mount_read_only; + bool force_mount; + time_t start_time; + uint32_t attempt_idx; + bool limit_logged; + uint32_t attach_einval_count; + uint32_t nmount_enoent_count; + uint32_t nmount_einval_count; + uint32_t nmount_semantic_count; + uint32_t other_fail_count; +} pfs_bruteforce_state_t; + +typedef struct { + char path[MAX_PATH]; + time_t cooldown_until; + bool cooldown_logged; + bool valid; +} pfs_cooldown_entry_t; + +#define PFS_COOLDOWN_CAPACITY 64 +static pfs_cooldown_entry_t g_pfs_cooldowns[PFS_COOLDOWN_CAPACITY]; +static time_t g_pfs_global_attempt_window = 0; +static uint32_t g_pfs_global_attempts = 0; + +static pfs_cooldown_entry_t *find_or_create_pfs_cooldown(const char *path) { + for (int i = 0; i < PFS_COOLDOWN_CAPACITY; i++) { + if (!g_pfs_cooldowns[i].valid) + continue; + if (strcmp(g_pfs_cooldowns[i].path, path) == 0) + return &g_pfs_cooldowns[i]; + } + for (int i = 0; i < PFS_COOLDOWN_CAPACITY; i++) { + if (g_pfs_cooldowns[i].valid) + continue; + memset(&g_pfs_cooldowns[i], 0, sizeof(g_pfs_cooldowns[i])); + g_pfs_cooldowns[i].valid = true; + (void)strlcpy(g_pfs_cooldowns[i].path, path, sizeof(g_pfs_cooldowns[i].path)); + return &g_pfs_cooldowns[i]; + } + return &g_pfs_cooldowns[0]; +} + +static bool is_pfs_cooldown_active(const char *path, time_t *remaining_out) { + if (remaining_out) + *remaining_out = 0; + for (int i = 0; i < PFS_COOLDOWN_CAPACITY; i++) { + if (!g_pfs_cooldowns[i].valid) + continue; + if 
(strcmp(g_pfs_cooldowns[i].path, path) != 0) + continue; + time_t now = time(NULL); + if (g_pfs_cooldowns[i].cooldown_until <= now) { + g_pfs_cooldowns[i].cooldown_logged = false; + return false; + } + if (remaining_out) + *remaining_out = g_pfs_cooldowns[i].cooldown_until - now; + if (!g_pfs_cooldowns[i].cooldown_logged) { + log_debug(" [IMG][BRUTE] cooldown active (%lds), skip heavy search: %s", + (long)(g_pfs_cooldowns[i].cooldown_until - now), path); + g_pfs_cooldowns[i].cooldown_logged = true; + } + return true; + } + return false; +} + +static void set_pfs_cooldown(const char *path, uint32_t seconds) { + pfs_cooldown_entry_t *entry = find_or_create_pfs_cooldown(path); + if (!entry) + return; + entry->cooldown_until = time(NULL) + (time_t)seconds; + entry->cooldown_logged = false; +} + +static bool stage_a_attach_tuple(const char *file_path, off_t file_size, + const pfs_attach_tuple_t *tuple, + int *unit_id_out, char *devname_out, + size_t devname_size, int *errno_out) { + if (errno_out) + *errno_out = 0; + + lvd_ioctl_layer_v0_t layers[LVD_ATTACH_LAYER_COUNT]; + memset(layers, 0, sizeof(layers)); + layers[0].source_type = get_lvd_source_type(file_path); + layers[0].flags = LVD_ENTRY_FLAG_NO_BITMAP; + layers[0].path = file_path; + layers[0].offset = 0; + layers[0].size = (uint64_t)file_size; + + lvd_ioctl_attach_v0_t req; + memset(&req, 0, sizeof(req)); + req.io_version = LVD_ATTACH_IO_VERSION_V0; + req.image_type = tuple->image_type; + req.layer_count = LVD_ATTACH_LAYER_COUNT; + req.device_size = (uint64_t)file_size; + req.layers_ptr = layers; + req.sector_size = tuple->sector_size; + req.secondary_unit = tuple->secondary_unit; + req.flags = tuple->normalized_flags; + req.device_id = -1; + + int fd = open(LVD_CTRL_PATH, O_RDWR); + if (fd < 0) { + if (errno_out) + *errno_out = errno; + return false; + } + + int ret = ioctl(fd, SCE_LVD_IOC_ATTACH_V0, &req); + int saved_errno = (ret == 0) ? 
0 : errno; + close(fd); + if (ret != 0 || req.device_id < 0) { + if (errno_out) + *errno_out = (saved_errno != 0) ? saved_errno : EINVAL; + return false; + } + + snprintf(devname_out, devname_size, "/dev/lvd%d", req.device_id); + if (!wait_for_dev_node_state(devname_out, true)) { + (void)detach_attached_unit(ATTACH_BACKEND_LVD, req.device_id); + if (errno_out) + *errno_out = ETIMEDOUT; + return false; + } + + *unit_id_out = req.device_id; + return true; +} + +static bool stage_b_nmount_profile(const char *mount_point, const char *devname, + bool mount_read_only, bool force_mount, + const pfs_nmount_profile_t *np, + bool include_noatime, + char *mount_errmsg, size_t errmsg_size, + int *errno_out) { + if (errno_out) + *errno_out = 0; + if (mount_errmsg && errmsg_size > 0) + mount_errmsg[0] = '\0'; + + // Keep mount paths/devnodes valid across retries so ENOENT is actionable. + if (!devname || devname[0] == '\0' || !path_exists(devname)) { + if (errno_out) + *errno_out = ENOENT; + if (mount_errmsg && errmsg_size > 0) { + (void)snprintf(mount_errmsg, errmsg_size, "missing source device: %s", + (devname && devname[0] != '\0') ? 
devname : ""); + } + errno = ENOENT; + return false; + } + + if (!mount_point || mount_point[0] == '\0') { + if (errno_out) + *errno_out = ENOENT; + if (mount_errmsg && errmsg_size > 0) + (void)snprintf(mount_errmsg, errmsg_size, "missing mount point"); + errno = ENOENT; + return false; + } + + if (mkdir(IMAGE_MOUNT_BASE, 0777) != 0 && errno != EEXIST) { + int mkerr = errno; + if (errno_out) + *errno_out = mkerr; + if (mount_errmsg && errmsg_size > 0) + (void)snprintf(mount_errmsg, errmsg_size, + "mkdir failed for %s: %s", IMAGE_MOUNT_BASE, + strerror(mkerr)); + errno = mkerr; + return false; + } + + if (mkdir(mount_point, 0777) != 0 && errno != EEXIST) { + int mkerr = errno; + if (errno_out) + *errno_out = mkerr; + if (mount_errmsg && errmsg_size > 0) + (void)snprintf(mount_errmsg, errmsg_size, + "mkdir failed for %s: %s", mount_point, + strerror(mkerr)); + errno = mkerr; + return false; + } + + struct iovec iov[48]; + unsigned int iovlen = 0; + + // Mandatory keys. + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("from"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(devname); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("fspath"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(mount_point); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("fstype"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->fstype); + + // Stage B profile keys. + if (np->key_level >= 1) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("budgetid"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->budgetid); + } + if (np->key_level >= 2) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("mkeymode"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->mkeymode); + } + if (np->key_level >= 3 || strcmp(np->fstype, "pfs") == 0) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("sigverify"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->sigverify ? "1" : "0"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("playgo"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->playgo ? 
"1" : "0"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("disc"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(np->disc ? "1" : "0"); + } + if (strcmp(np->fstype, "pfs") == 0 && np->include_ekpfs) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("ekpfs"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(PFS_ZERO_EKPFS_KEY_HEX); + } + + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("async"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(NULL); + if (include_noatime) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("noatime"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(NULL); + } + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("automounted"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(NULL); + + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("errmsg"); + iov[iovlen].iov_base = (void *)mount_errmsg; + iov[iovlen].iov_len = errmsg_size; + iovlen++; + + if (force_mount) { + iov[iovlen++] = (struct iovec)IOVEC_ENTRY("force"); + iov[iovlen++] = (struct iovec)IOVEC_ENTRY(NULL); + } + + if (nmount(iov, iovlen, mount_read_only ? 
MNT_RDONLY : 0) == 0) + return true; + + int nmount_errno = errno; + if (errno_out) + *errno_out = nmount_errno; + if (mount_errmsg && errmsg_size > 0 && mount_errmsg[0] == '\0') { + (void)snprintf(mount_errmsg, errmsg_size, "%s (%d)", + strerror(nmount_errno), nmount_errno); + } + return false; +} + +static bool pfs_bruteforce_limits_reached(pfs_bruteforce_state_t *state) { + time_t now = time(NULL); + bool reached = + state->attempt_idx >= state->cfg->pfs_bruteforce_max_attempts || + (uint32_t)(now - state->start_time) >= + state->cfg->pfs_bruteforce_max_seconds_per_image || + g_pfs_global_attempts >= + state->cfg->pfs_bruteforce_max_global_attempts_per_scan; + + if (!reached || state->limit_logged) + return reached; + + log_debug(" [IMG][BRUTE] limits reached: attempts=%u elapsed=%us global=%u", + state->attempt_idx, (unsigned)(now - state->start_time), + g_pfs_global_attempts); + state->limit_logged = true; + return true; +} + +static void pfs_bruteforce_sleep(const pfs_bruteforce_state_t *state) { + if (state->cfg->pfs_bruteforce_sleep_ms > 0) + sceKernelUsleep(state->cfg->pfs_bruteforce_sleep_ms * 1000u); +} + +static void fill_mount_profile_from_tuple(mount_profile_t *profile, + const pfs_attach_tuple_t *tuple, + const pfs_nmount_profile_t *np, + bool supports_noatime, + bool mount_read_only) { + memset(profile, 0, sizeof(*profile)); + profile->io_version = LVD_ATTACH_IO_VERSION_V0; + profile->image_type = tuple->image_type; + profile->raw_flags = tuple->raw_flags; + profile->normalized_flags = tuple->normalized_flags; + profile->sector_size = tuple->sector_size; + profile->secondary_unit = tuple->secondary_unit; + profile->fstype = np->fstype; + profile->budgetid = np->budgetid; + profile->mkeymode = np->mkeymode; + profile->sigverify = np->sigverify; + profile->playgo = np->playgo; + profile->disc = np->disc; + profile->include_ekpfs = np->include_ekpfs; + profile->supports_noatime = supports_noatime; + profile->mount_read_only = mount_read_only; +} + 
+static void count_attach_failure(pfs_bruteforce_state_t *state, int err) { + if (err == EINVAL) + state->attach_einval_count++; + else + state->other_fail_count++; +} + +static const char *errno_name_short(int err) { + switch (err) { + case 0: + return "OK"; + case ENOENT: + return "ENOENT"; + case EINVAL: + return "EINVAL"; + case EOPNOTSUPP: + return "EOPNOTSUPP"; + case EACCES: + return "EACCES"; + case EPERM: + return "EPERM"; + case ENOMEM: + return "ENOMEM"; + case ETIMEDOUT: + return "ETIMEDOUT"; + case EBUSY: + return "EBUSY"; + case ENXIO: + return "ENXIO"; + case ENODEV: + return "ENODEV"; + case EIO: + return "EIO"; + default: + return "OTHER"; + } +} + +static void count_nmount_failure(pfs_bruteforce_state_t *state, int err) { + if (err == ENOENT) + state->nmount_enoent_count++; + else if (err == EINVAL) + state->nmount_einval_count++; + else if (err == EOPNOTSUPP) + state->nmount_semantic_count++; + else + state->other_fail_count++; +} + +static bool pfs_try_nmount_profile(pfs_bruteforce_state_t *state, + const pfs_attach_tuple_t *tuple, + const pfs_nmount_profile_t *np, + int *unit_id_io, char *devname, + size_t devname_size, + mount_profile_t *winner_out) { + bool include_noatime = np->supports_noatime; + bool suppress_nmount_count = false; + char errmsg[256]; + int nmount_err = 0; + bool ok = stage_b_nmount_profile(state->mount_point, devname, + state->mount_read_only, + state->force_mount, np, include_noatime, + errmsg, sizeof(errmsg), &nmount_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=B idx=%u tuple=(img=%u raw=0x%x sec=%u sec2=%u) opts=(fstype=%s budget=%s mkey=%s sig=%u playgo=%u disc=%u ekpfs=%d noatime=%d) result=%s errno=%d(%s)%s%s", + state->attempt_idx, tuple->image_type, tuple->raw_flags, + tuple->sector_size, tuple->secondary_unit, np->fstype, + np->budgetid ? np->budgetid : "-", + np->mkeymode ? np->mkeymode : "-", np->sigverify, np->playgo, + np->disc, np->include_ekpfs ? 1 : 0, include_noatime ? 1 : 0, + ok ? 
"NMOUNT_OK" : "NMOUNT_FAIL", nmount_err, + errno_name_short(nmount_err), + (!ok && errmsg[0]) ? " msg=" : "", + (!ok && errmsg[0]) ? errmsg : ""); + state->attempt_idx++; + + if (!ok && nmount_err == EINVAL && include_noatime) { + int retry_err = 0; + bool retry_ok = stage_b_nmount_profile(state->mount_point, devname, + state->mount_read_only, + state->force_mount, np, false, + errmsg, sizeof(errmsg), &retry_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=B idx=%u retry=(drop-noatime) tuple=(img=%u raw=0x%x sec=%u sec2=%u) opts=(fstype=%s budget=%s mkey=%s sig=%u playgo=%u disc=%u ekpfs=%d noatime=0) result=%s errno=%d(%s)%s%s", + state->attempt_idx, tuple->image_type, tuple->raw_flags, + tuple->sector_size, tuple->secondary_unit, np->fstype, + np->budgetid ? np->budgetid : "-", + np->mkeymode ? np->mkeymode : "-", np->sigverify, np->playgo, + np->disc, np->include_ekpfs ? 1 : 0, + retry_ok ? "NMOUNT_OK" : "NMOUNT_FAIL", retry_err, + errno_name_short(retry_err), + (!retry_ok && errmsg[0]) ? " msg=" : "", + (!retry_ok && errmsg[0]) ? errmsg : ""); + state->attempt_idx++; + + ok = retry_ok; + nmount_err = retry_err; + include_noatime = false; + } + + if (!ok && nmount_err == ENOENT) { + int reattach_err = 0; + if (*unit_id_io >= 0) + (void)detach_attached_unit(ATTACH_BACKEND_LVD, *unit_id_io); + *unit_id_io = -1; + if (devname_size > 0) + devname[0] = '\0'; + + bool reattached = stage_a_attach_tuple(state->file_path, state->file_size, + tuple, unit_id_io, devname, + devname_size, &reattach_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=A idx=%u retry=(reattach-after-enoent) tuple=(img=%u raw=0x%x flags=0x%x sec=%u sec2=%u) result=%s errno=%d(%s)", + state->attempt_idx, tuple->image_type, tuple->raw_flags, + tuple->normalized_flags, tuple->sector_size, + tuple->secondary_unit, reattached ? 
"ATTACH_OK" : "ATTACH_FAIL", + reattach_err, errno_name_short(reattach_err)); + state->attempt_idx++; + + if (!reattached) { + count_attach_failure(state, reattach_err); + suppress_nmount_count = true; + } else { + int retry_err = 0; + bool retry_ok = stage_b_nmount_profile(state->mount_point, devname, + state->mount_read_only, + state->force_mount, np, + include_noatime, errmsg, + sizeof(errmsg), &retry_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=B idx=%u retry=(reattach-after-enoent) tuple=(img=%u raw=0x%x sec=%u sec2=%u) opts=(fstype=%s budget=%s mkey=%s sig=%u playgo=%u disc=%u ekpfs=%d noatime=%d) result=%s errno=%d(%s)%s%s", + state->attempt_idx, tuple->image_type, tuple->raw_flags, + tuple->sector_size, tuple->secondary_unit, np->fstype, + np->budgetid ? np->budgetid : "-", + np->mkeymode ? np->mkeymode : "-", np->sigverify, + np->playgo, np->disc, np->include_ekpfs ? 1 : 0, + include_noatime ? 1 : 0, + retry_ok ? "NMOUNT_OK" : "NMOUNT_FAIL", retry_err, + errno_name_short(retry_err), + (!retry_ok && errmsg[0]) ? " msg=" : "", + (!retry_ok && errmsg[0]) ? errmsg : ""); + state->attempt_idx++; + + ok = retry_ok; + nmount_err = retry_err; + + if (!ok && nmount_err == EINVAL && include_noatime) { + int drop_noatime_err = 0; + bool drop_noatime_ok = stage_b_nmount_profile( + state->mount_point, devname, state->mount_read_only, + state->force_mount, np, false, errmsg, sizeof(errmsg), + &drop_noatime_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=B idx=%u retry=(drop-noatime-after-enoent) tuple=(img=%u raw=0x%x sec=%u sec2=%u) opts=(fstype=%s budget=%s mkey=%s sig=%u playgo=%u disc=%u ekpfs=%d noatime=0) result=%s errno=%d(%s)%s%s", + state->attempt_idx, tuple->image_type, tuple->raw_flags, + tuple->sector_size, tuple->secondary_unit, np->fstype, + np->budgetid ? np->budgetid : "-", + np->mkeymode ? np->mkeymode : "-", np->sigverify, + np->playgo, np->disc, np->include_ekpfs ? 1 : 0, + drop_noatime_ok ? 
"NMOUNT_OK" : "NMOUNT_FAIL", + drop_noatime_err, errno_name_short(drop_noatime_err), + (!drop_noatime_ok && errmsg[0]) ? " msg=" : "", + (!drop_noatime_ok && errmsg[0]) ? errmsg : ""); + state->attempt_idx++; + + ok = drop_noatime_ok; + nmount_err = drop_noatime_err; + include_noatime = false; + } + } + } + + if (ok && validate_mounted_image(state->file_path, state->fs_type, + ATTACH_BACKEND_LVD, *unit_id_io, devname, + state->mount_point)) { + fill_mount_profile_from_tuple(winner_out, tuple, np, + include_noatime, + state->mount_read_only); + return true; + } + + if (ok) + (void)unmount_image(state->file_path, *unit_id_io, ATTACH_BACKEND_LVD); + else if (!suppress_nmount_count) + count_nmount_failure(state, nmount_err); + + if (*unit_id_io >= 0) + (void)detach_attached_unit(ATTACH_BACKEND_LVD, *unit_id_io); + *unit_id_io = -1; + if (devname_size > 0) + devname[0] = '\0'; + return false; +} + +static bool pfs_try_attached_tuple_profiles(pfs_bruteforce_state_t *state, + const pfs_attach_tuple_t *tuple, + int *unit_id_io, char *devname, + size_t devname_size, + mount_profile_t *winner_out) { + static const pfs_nmount_profile_t k_pfs_primary_profiles[] = { + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_SD, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_AC, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_SYSTEM, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + 
{.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = false, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 1, + .playgo = 0, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 1, + .disc = 0, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + {.fstype = "pfs", + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 0, + .disc = 1, + .include_ekpfs = true, + .supports_noatime = true, + .key_level = 3}, + }; + static const char *k_fallback_fstypes[] = {"ppr_pfs", "transaction_pfs"}; + + for (size_t i = 0; i < sizeof(k_pfs_primary_profiles) / sizeof(k_pfs_primary_profiles[0]); i++) { + if (pfs_bruteforce_limits_reached(state)) + return false; + + if (pfs_try_nmount_profile(state, tuple, &k_pfs_primary_profiles[i], + unit_id_io, devname, devname_size, + winner_out)) { + return true; + } + + if (pfs_bruteforce_limits_reached(state)) + return false; + + int attach_err = 0; + if (!stage_a_attach_tuple(state->file_path, state->file_size, tuple, + unit_id_io, devname, devname_size, + &attach_err)) { + count_attach_failure(state, attach_err); + return false; + } + pfs_bruteforce_sleep(state); + } + + for (size_t i = 0; i < sizeof(k_fallback_fstypes) / sizeof(k_fallback_fstypes[0]); i++) { + uint8_t key_level = 0; + while (key_level <= 3) { + if (pfs_bruteforce_limits_reached(state)) + return false; + + pfs_nmount_profile_t np = { + .fstype = k_fallback_fstypes[i], + .budgetid = DEVPFS_BUDGET_GAME, + .mkeymode = DEVPFS_MKEYMODE_GD, + .sigverify = 0, + .playgo = 0, + .disc = 0, + .include_ekpfs = false, + .supports_noatime = true, + .key_level = key_level, + }; + int 
unit_before_attempt = *unit_id_io; + if (pfs_try_nmount_profile(state, tuple, &np, unit_id_io, devname, + devname_size, winner_out)) { + return true; + } + + if (pfs_bruteforce_limits_reached(state)) + return false; + + int attach_err = 0; + if (!stage_a_attach_tuple(state->file_path, state->file_size, tuple, + unit_id_io, devname, devname_size, + &attach_err)) { + count_attach_failure(state, attach_err); + return false; + } + pfs_bruteforce_sleep(state); + + if (unit_before_attempt < 0) + break; + key_level++; + } + } + + return false; +} + +static bool pfs_try_attach_pass(pfs_bruteforce_state_t *state, + const pfs_attach_pass_t *pass, + mount_profile_t *winner_out, int *unit_id_out, + char *devname_out, size_t devname_size) { + for (size_t i = 0; i < pass->image_type_count; i++) { + for (size_t r = 0; r < pass->raw_flag_count; r++) { + for (size_t s = 0; s < pass->sector_size_count; s++) { + if (pfs_bruteforce_limits_reached(state)) + return false; + + uint32_t sec = pass->sector_sizes[s]; + uint32_t sec2 = sec; + if (pass->force_secondary_65536) { + if (sec == 65536u) + continue; + sec2 = 65536u; + } + + pfs_attach_tuple_t tuple = { + .image_type = pass->image_types[i], + .raw_flags = pass->raw_flags[r], + .normalized_flags = normalize_lvd_raw_flags(pass->raw_flags[r]), + .sector_size = sec, + .secondary_unit = sec2, + }; + + int attach_err = 0; + int temp_unit = -1; + char temp_dev[64]; + bool ok = stage_a_attach_tuple(state->file_path, state->file_size, + &tuple, &temp_unit, temp_dev, + sizeof(temp_dev), &attach_err); + g_pfs_global_attempts++; + log_debug(" [IMG][BRUTE] stage=A pass=%s idx=%u tuple=(img=%u raw=0x%x flags=0x%x sec=%u sec2=%u) result=%s errno=%d", + pass->label, state->attempt_idx, tuple.image_type, + tuple.raw_flags, tuple.normalized_flags, tuple.sector_size, + tuple.secondary_unit, ok ? 
"ATTACH_OK" : "ATTACH_FAIL", + attach_err); + state->attempt_idx++; + + if (!ok) { + count_attach_failure(state, attach_err); + pfs_bruteforce_sleep(state); + continue; + } + + if (pfs_try_attached_tuple_profiles(state, &tuple, &temp_unit, + temp_dev, sizeof(temp_dev), + winner_out)) { + *unit_id_out = temp_unit; + (void)strlcpy(devname_out, temp_dev, devname_size); + return true; + } + + if (temp_unit >= 0) + (void)detach_attached_unit(ATTACH_BACKEND_LVD, temp_unit); + pfs_bruteforce_sleep(state); + } + } + } + + return false; +} + +static bool mount_profile_equals(const mount_profile_t *a, + const mount_profile_t *b) { + if (!a || !b) + return false; + if (a->image_type != b->image_type || + a->raw_flags != b->raw_flags || + a->normalized_flags != b->normalized_flags || + a->sector_size != b->sector_size || + a->secondary_unit != b->secondary_unit || + a->sigverify != b->sigverify || + a->playgo != b->playgo || + a->disc != b->disc || + a->include_ekpfs != b->include_ekpfs || + a->supports_noatime != b->supports_noatime || + a->mount_read_only != b->mount_read_only) + return false; + + const char *a_fstype = a->fstype ? a->fstype : ""; + const char *b_fstype = b->fstype ? b->fstype : ""; + const char *a_budget = a->budgetid ? a->budgetid : ""; + const char *b_budget = b->budgetid ? b->budgetid : ""; + const char *a_mkey = a->mkeymode ? a->mkeymode : ""; + const char *b_mkey = b->mkeymode ? 
b->mkeymode : ""; + + return strcmp(a_fstype, b_fstype) == 0 && + strcmp(a_budget, b_budget) == 0 && + strcmp(a_mkey, b_mkey) == 0; +} + +static bool append_unique_profile(mount_profile_t *profiles, int *count, + int max_count, + const mount_profile_t *candidate) { + if (!profiles || !count || !candidate || *count < 0 || max_count <= 0) + return false; + + for (int i = 0; i < *count; i++) { + if (mount_profile_equals(&profiles[i], candidate)) + return false; + } + + if (*count >= max_count) + return false; + + profiles[*count] = *candidate; + (*count)++; + return true; +} + +static bool pfs_mount_with_profile(const char *file_path, + image_fs_type_t fs_type, + off_t file_size, + const char *mount_point, + bool mount_read_only, + bool force_mount, + const mount_profile_t *profile, + int *unit_id_out, + char *devname_out, + size_t devname_size, + uint32_t *mount_ms_out) { + if (!file_path || !mount_point || !profile || !unit_id_out || !devname_out) + return false; + + if (mount_ms_out) + *mount_ms_out = 0; + + pfs_attach_tuple_t tuple = { + .image_type = profile->image_type, + .raw_flags = profile->raw_flags, + .normalized_flags = profile->normalized_flags, + .sector_size = profile->sector_size, + .secondary_unit = profile->secondary_unit, + }; + + int attach_err = 0; + if (!stage_a_attach_tuple(file_path, file_size, &tuple, + unit_id_out, devname_out, + devname_size, &attach_err)) { + return false; + } + + pfs_nmount_profile_t np = { + .fstype = profile->fstype ? profile->fstype : "pfs", + .budgetid = profile->budgetid ? profile->budgetid : DEVPFS_BUDGET_GAME, + .mkeymode = profile->mkeymode ? 
profile->mkeymode : DEVPFS_MKEYMODE_GD, + .sigverify = profile->sigverify, + .playgo = profile->playgo, + .disc = profile->disc, + .include_ekpfs = profile->include_ekpfs, + .supports_noatime = profile->supports_noatime, + .key_level = 3, + }; + + int nmount_err = 0; + char errmsg[256]; + uint64_t start_us = monotonic_time_us(); + bool used_noatime = np.supports_noatime; + bool mounted = stage_b_nmount_profile(mount_point, devname_out, + mount_read_only, force_mount, + &np, used_noatime, + errmsg, sizeof(errmsg), &nmount_err); + if (!mounted && nmount_err == EINVAL && used_noatime) { + mounted = stage_b_nmount_profile(mount_point, devname_out, + mount_read_only, force_mount, + &np, false, + errmsg, sizeof(errmsg), &nmount_err); + used_noatime = false; + } + + if (mount_ms_out) + *mount_ms_out = (uint32_t)((monotonic_time_us() - start_us) / 1000u); + + if (!mounted || !validate_mounted_image(file_path, fs_type, + ATTACH_BACKEND_LVD, + *unit_id_out, devname_out, + mount_point)) { + (void)unmount_image(file_path, *unit_id_out, ATTACH_BACKEND_LVD); + *unit_id_out = -1; + if (devname_size > 0) + devname_out[0] = '\0'; + return false; + } + + (void)used_noatime; + return true; +} + +static int pfs_collect_working_profiles(const runtime_config_t *cfg, + const char *file_path, + image_fs_type_t fs_type, + off_t file_size, + const char *mount_point, + bool mount_read_only, + bool force_mount, + mount_profile_t *profiles_out, + int max_profiles) { + if (!cfg || !file_path || !mount_point || !profiles_out || max_profiles <= 0) + return 0; + + static const uint16_t k_fast_image_types[] = {0}; + static const uint16_t k_fast_fallback_image_types[] = {5}; + static const uint16_t k_secondary_image_types[] = {2, 3, 4, 6}; + static const uint16_t k_last_resort_image_types[] = {1, 7}; + static const uint16_t k_primary_raw_flags[] = {0x9, 0x8}; + static const uint16_t k_last_resort_raw_flags[] = {0xD, 0xC}; + static const uint16_t k_primary_image_types[] = {0, 5}; + static const 
uint32_t k_sector_candidates[] = {4096u}; + static const pfs_attach_pass_t k_attach_passes[] = { + {.image_types = k_fast_image_types, + .image_type_count = sizeof(k_fast_image_types) / sizeof(k_fast_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "fast-img0"}, + {.image_types = k_fast_fallback_image_types, + .image_type_count = sizeof(k_fast_fallback_image_types) / sizeof(k_fast_fallback_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "fast-img5"}, + {.image_types = k_fast_image_types, + .image_type_count = sizeof(k_fast_image_types) / sizeof(k_fast_image_types[0]), + .raw_flags = &k_primary_raw_flags[1], + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "expand-img0"}, + {.image_types = k_fast_fallback_image_types, + .image_type_count = sizeof(k_fast_fallback_image_types) / sizeof(k_fast_fallback_image_types[0]), + .raw_flags = &k_primary_raw_flags[1], + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "expand-img5"}, + {.image_types = k_secondary_image_types, + .image_type_count = sizeof(k_secondary_image_types) / sizeof(k_secondary_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = sizeof(k_primary_raw_flags) / sizeof(k_primary_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + 
.force_secondary_65536 = false, + .label = "secondary-images"}, + {.image_types = k_last_resort_image_types, + .image_type_count = sizeof(k_last_resort_image_types) / sizeof(k_last_resort_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = sizeof(k_primary_raw_flags) / sizeof(k_primary_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "last-images"}, + {.image_types = k_primary_image_types, + .image_type_count = sizeof(k_primary_image_types) / sizeof(k_primary_image_types[0]), + .raw_flags = k_last_resort_raw_flags, + .raw_flag_count = sizeof(k_last_resort_raw_flags) / sizeof(k_last_resort_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "last-raws"}, + }; + + pfs_bruteforce_state_t state = { + .cfg = cfg, + .file_path = file_path, + .fs_type = fs_type, + .file_size = file_size, + .mount_point = mount_point, + .mount_read_only = mount_read_only, + .force_mount = force_mount, + .start_time = time(NULL), + }; + + int found = 0; + for (size_t pass_idx = 0; + pass_idx < sizeof(k_attach_passes) / sizeof(k_attach_passes[0]); + pass_idx++) { + const pfs_attach_pass_t *pass = &k_attach_passes[pass_idx]; + for (size_t i = 0; i < pass->image_type_count; i++) { + for (size_t r = 0; r < pass->raw_flag_count; r++) { + for (size_t s = 0; s < pass->sector_size_count; s++) { + if (pfs_bruteforce_limits_reached(&state)) + return found; + + pfs_attach_tuple_t tuple = { + .image_type = pass->image_types[i], + .raw_flags = pass->raw_flags[r], + .normalized_flags = normalize_lvd_raw_flags(pass->raw_flags[r]), + .sector_size = pass->sector_sizes[s], + .secondary_unit = pass->force_secondary_65536 + ? 
65536u + : pass->sector_sizes[s], + }; + + int unit_id = -1; + int attach_err = 0; + char devname[64]; + memset(devname, 0, sizeof(devname)); + + bool attached = stage_a_attach_tuple(file_path, file_size, &tuple, + &unit_id, devname, sizeof(devname), + &attach_err); + // Count probe attaches against the same per-scan global budget as the + // brute-force passes so probing cannot exceed the configured cap. + g_pfs_global_attempts++; + if (!attached) { + count_attach_failure(&state, attach_err); + state.attempt_idx++; + pfs_bruteforce_sleep(&state); + continue; + } + + mount_profile_t winner; + memset(&winner, 0, sizeof(winner)); + if (pfs_try_attached_tuple_profiles(&state, &tuple, + &unit_id, devname, + sizeof(devname), &winner)) { + if (append_unique_profile(profiles_out, &found, max_profiles, + &winner)) { + log_debug(" [IMG][PROBE] working profile #%d found for %s", + found, file_path); + } + (void)unmount_image(file_path, unit_id, ATTACH_BACKEND_LVD); + unit_id = -1; + } else if (unit_id >= 0) { + (void)detach_attached_unit(ATTACH_BACKEND_LVD, unit_id); + } + + pfs_bruteforce_sleep(&state); + } + } + } + } + + return found; +} + // --- Image Attach + nmount Pipeline --- bool mount_image(const char *file_path, image_fs_type_t fs_type) { sm_error_clear(); @@ -799,6 +1822,377 @@ bool mount_image(const char *file_path, image_fs_type_t fs_type) { int unit_id = -1; char devname[64]; memset(devname, 0, sizeof(devname)); + const char *filename_local = get_filename_component(file_path); + + if (fs_type == IMAGE_FS_PFS) { + mount_profile_t cached_profile; + if (get_cached_mount_profile(filename_local, &cached_profile)) { + pfs_attach_tuple_t cached_tuple = { + .image_type = cached_profile.image_type, + .raw_flags = cached_profile.raw_flags, + .normalized_flags = cached_profile.normalized_flags, + .sector_size = cached_profile.sector_size, + .secondary_unit = cached_profile.secondary_unit, + }; + pfs_attach_tuple_t runtime_tuple = { + .image_type = get_lvd_image_type(fs_type), + .raw_flags = get_lvd_attach_raw_flags(fs_type, mount_read_only), + .normalized_flags = + normalize_lvd_raw_flags(get_lvd_attach_raw_flags(fs_type, + mount_read_only)), + .sector_size =
get_lvd_sector_size(file_path, fs_type), + .secondary_unit = get_lvd_secondary_unit(file_path, fs_type), + }; + + pfs_attach_tuple_t tuple_candidates[2]; + int tuple_count = 0; + tuple_candidates[tuple_count++] = cached_tuple; + if (cached_tuple.image_type != runtime_tuple.image_type || + cached_tuple.raw_flags != runtime_tuple.raw_flags || + cached_tuple.normalized_flags != runtime_tuple.normalized_flags || + cached_tuple.sector_size != runtime_tuple.sector_size || + cached_tuple.secondary_unit != runtime_tuple.secondary_unit) { + tuple_candidates[tuple_count++] = runtime_tuple; + } + + // Pass 0: use ekpfs as stored in profile. + // Pass 1: retry without ekpfs if pass 0 mounted but root was unreadable + // (wrong zero key causes opendir to fail on encrypted PFS). + for (int tuple_pass = 0; tuple_pass < tuple_count; tuple_pass++) { + const pfs_attach_tuple_t *tuple = &tuple_candidates[tuple_pass]; + int ekpfs_passes = cached_profile.include_ekpfs ? 2 : 1; + for (int ekpfs_pass = 0; ekpfs_pass < ekpfs_passes; ekpfs_pass++) { + int cached_err = 0; + if (!stage_a_attach_tuple(file_path, st.st_size, tuple, &unit_id, + devname, sizeof(devname), &cached_err)) { + log_debug(" [IMG][PFS] cached stage-A failed: %s tuple_pass=%d img=%u raw=0x%x sec=%u sec2=%u err=%d", + filename_local, tuple_pass, tuple->image_type, + tuple->raw_flags, tuple->sector_size, + tuple->secondary_unit, cached_err); + break; + } + + char cached_errmsg[256]; + int nmount_err = 0; + pfs_nmount_profile_t cp = { + .fstype = cached_profile.fstype ? cached_profile.fstype : "pfs", + .budgetid = cached_profile.budgetid ? cached_profile.budgetid : DEVPFS_BUDGET_GAME, + .mkeymode = cached_profile.mkeymode ? cached_profile.mkeymode : DEVPFS_MKEYMODE_GD, + .sigverify = cached_profile.sigverify, + .playgo = cached_profile.playgo, + .disc = cached_profile.disc, + .include_ekpfs = (ekpfs_pass == 0) ? 
cached_profile.include_ekpfs : false, + .supports_noatime = cached_profile.supports_noatime, + .key_level = 3, + }; + bool cached_ok = stage_b_nmount_profile( + mount_point, devname, mount_read_only, force_mount, &cp, + cp.supports_noatime, cached_errmsg, sizeof(cached_errmsg), + &nmount_err); + bool cached_used_noatime = cp.supports_noatime; + if (!cached_ok && nmount_err == EINVAL && cached_used_noatime) { + cached_ok = stage_b_nmount_profile( + mount_point, devname, mount_read_only, force_mount, &cp, + false, cached_errmsg, sizeof(cached_errmsg), &nmount_err); + cached_used_noatime = false; + } + + log_debug(" [IMG][PFS] cached stage-B tuple_pass=%d pass=%d tuple=(img=%u raw=0x%x sec=%u sec2=%u) ekpfs=%d result=%s errno=%d%s%s", + tuple_pass, ekpfs_pass, tuple->image_type, + tuple->raw_flags, tuple->sector_size, + tuple->secondary_unit, cp.include_ekpfs ? 1 : 0, + cached_ok ? "OK" : "FAIL", nmount_err, + (cached_errmsg[0]) ? " msg=" : "", + (cached_errmsg[0]) ? cached_errmsg : ""); + + if (cached_ok && validate_mounted_image(file_path, fs_type, + ATTACH_BACKEND_LVD, unit_id, + devname, mount_point)) { + bool profile_changed = false; + if (tuple_pass > 0) { + cached_profile.image_type = tuple->image_type; + cached_profile.raw_flags = tuple->raw_flags; + cached_profile.normalized_flags = tuple->normalized_flags; + cached_profile.sector_size = tuple->sector_size; + cached_profile.secondary_unit = tuple->secondary_unit; + profile_changed = true; + } + if (ekpfs_pass > 0 && cached_profile.include_ekpfs) { + cached_profile.include_ekpfs = false; + profile_changed = true; + } + if (cached_profile.supports_noatime != cached_used_noatime) { + cached_profile.supports_noatime = cached_used_noatime; + profile_changed = true; + } + if (profile_changed) + (void)cache_mount_profile(filename_local, &cached_profile); + log_debug(" [IMG][PFS] cached profile mounted: %s (tuple_pass=%d pass=%d ekpfs=%d noatime=%d)", + file_path, tuple_pass, ekpfs_pass, + cp.include_ekpfs ? 
1 : 0, + cached_used_noatime ? 1 : 0); + goto mount_success; + } + + // If mount happened (cached_ok) but validate failed, validate already + // called unmount_image internally; calling it again is safe (no-op). + // If mount never happened (cached_ok=false), only detach the LVD unit + // to avoid accidentally cleaning up unrelated stale mounts. + if (cached_ok) { + (void)unmount_image(file_path, unit_id, ATTACH_BACKEND_LVD); + } else if (unit_id >= 0) { + (void)detach_attached_unit(ATTACH_BACKEND_LVD, unit_id); + } + unit_id = -1; + memset(devname, 0, sizeof(devname)); + // Restore mount dir removed by unmount_image for next pass/fallback. + ensure_mount_dirs(mount_point); + } + } + } + } + + // For PFS images with brute-force enabled, use adaptive mount strategy + if (fs_type == IMAGE_FS_PFS && cfg->pfs_bruteforce_enabled) { + time_t cooldown_remaining = 0; + if (is_pfs_cooldown_active(file_path, &cooldown_remaining)) { + errno = EAGAIN; + return false; + } + + time_t now = time(NULL); + uint32_t scan_window_seconds = cfg->scan_interval_us / 1000000u; + if (scan_window_seconds == 0) + scan_window_seconds = 1; + if (g_pfs_global_attempt_window == 0 || + now - g_pfs_global_attempt_window >= (time_t)scan_window_seconds) { + g_pfs_global_attempt_window = now; + g_pfs_global_attempts = 0; + } + + log_debug(" [IMG][BRUTE] start two-stage solver: %s", file_path); + + mount_profile_t probe_profiles[SM_PROBE_MAX_WINNERS]; + memset(probe_profiles, 0, sizeof(probe_profiles)); + int probe_profile_count = bench_load_probe(filename_local, + probe_profiles, + SM_PROBE_MAX_WINNERS); + if (cfg->pfs_probe_enabled && probe_profile_count == 0) { + log_debug(" [IMG][PROBE] collecting working profiles for: %s", file_path); + probe_profile_count = pfs_collect_working_profiles( + cfg, file_path, fs_type, st.st_size, mount_point, + mount_read_only, force_mount, + probe_profiles, SM_PROBE_MAX_WINNERS); + if (probe_profile_count > 0) + bench_save_probe(filename_local, probe_profiles, 
probe_profile_count); + log_debug(" [IMG][PROBE] completed: %d working profiles", probe_profile_count); + } + + static const uint16_t k_fast_image_types[] = {0}; + static const uint16_t k_fast_fallback_image_types[] = {5}; + static const uint16_t k_secondary_image_types[] = {2, 3, 4, 6}; + static const uint16_t k_last_resort_image_types[] = {1, 7}; + static const uint16_t k_primary_raw_flags[] = {0x9, 0x8}; + static const uint16_t k_last_resort_raw_flags[] = {0xD, 0xC}; + static const uint16_t k_primary_image_types[] = {0, 5}; + static const uint32_t k_sector_candidates[] = {4096u}; + static const pfs_attach_pass_t k_attach_passes[] = { + {.image_types = k_fast_image_types, + .image_type_count = sizeof(k_fast_image_types) / sizeof(k_fast_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "fast-img0"}, + {.image_types = k_fast_fallback_image_types, + .image_type_count = sizeof(k_fast_fallback_image_types) / sizeof(k_fast_fallback_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "fast-img5"}, + {.image_types = k_fast_image_types, + .image_type_count = sizeof(k_fast_image_types) / sizeof(k_fast_image_types[0]), + .raw_flags = &k_primary_raw_flags[1], + .raw_flag_count = 1, + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "expand-img0"}, + {.image_types = k_fast_fallback_image_types, + .image_type_count = sizeof(k_fast_fallback_image_types) / sizeof(k_fast_fallback_image_types[0]), + .raw_flags = &k_primary_raw_flags[1], + .raw_flag_count = 1, + .sector_sizes = 
k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "expand-img5"}, + {.image_types = k_secondary_image_types, + .image_type_count = sizeof(k_secondary_image_types) / sizeof(k_secondary_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = sizeof(k_primary_raw_flags) / sizeof(k_primary_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "secondary-images"}, + {.image_types = k_last_resort_image_types, + .image_type_count = sizeof(k_last_resort_image_types) / sizeof(k_last_resort_image_types[0]), + .raw_flags = k_primary_raw_flags, + .raw_flag_count = sizeof(k_primary_raw_flags) / sizeof(k_primary_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "last-images"}, + {.image_types = k_primary_image_types, + .image_type_count = sizeof(k_primary_image_types) / sizeof(k_primary_image_types[0]), + .raw_flags = k_last_resort_raw_flags, + .raw_flag_count = sizeof(k_last_resort_raw_flags) / sizeof(k_last_resort_raw_flags[0]), + .sector_sizes = k_sector_candidates, + .sector_size_count = sizeof(k_sector_candidates) / sizeof(k_sector_candidates[0]), + .force_secondary_65536 = false, + .label = "last-raws"}, + }; + + bool mounted = false; + mount_profile_t winner; + memset(&winner, 0, sizeof(winner)); + pfs_bruteforce_state_t brute_state = { + .cfg = cfg, + .file_path = file_path, + .fs_type = fs_type, + .file_size = st.st_size, + .mount_point = mount_point, + .mount_read_only = mount_read_only, + .force_mount = force_mount, + .start_time = time(NULL), + }; + + for (size_t pass_idx = 0; + pass_idx < sizeof(k_attach_passes) / sizeof(k_attach_passes[0]) && + !mounted; + pass_idx++) { + mounted = 
pfs_try_attach_pass(&brute_state, &k_attach_passes[pass_idx], + &winner, &unit_id, devname, + sizeof(devname)); + } + + if (mounted) { + (void)cache_mount_profile(filename_local, &winner); + log_debug(" [IMG][BRUTE] winner selected: img=%u raw=0x%x flags=0x%x sec=%u sec2=%u fstype=%s budget=%s mkey=%s ekpfs=%u noatime=%u", + winner.image_type, winner.raw_flags, winner.normalized_flags, + winner.sector_size, winner.secondary_unit, + winner.fstype ? winner.fstype : "pfs", + winner.budgetid ? winner.budgetid : DEVPFS_BUDGET_GAME, + winner.mkeymode ? winner.mkeymode : DEVPFS_MKEYMODE_SD, + winner.include_ekpfs ? 1u : 0u, + winner.supports_noatime ? 1u : 0u); + + if (cfg->pfs_bench_enabled) { + int profile_count = probe_profile_count; + if (profile_count <= 0) { + probe_profiles[0] = winner; + profile_count = 1; + } + + bench_result_t existing[SM_PROBE_MAX_WINNERS]; + memset(existing, 0, sizeof(existing)); + int next_idx = 0; + bool bench_done = false; + (void)bench_load_results(filename_local, existing, + SM_PROBE_MAX_WINNERS, + &next_idx, &bench_done); + + if (!bench_done && next_idx >= 0 && next_idx < profile_count) { + int target_idx = next_idx; + mount_profile_t target_profile = probe_profiles[target_idx]; + mount_profile_t original_profile = winner; + + uint32_t mount_ms = 0; + bool mount_ok_for_bench = mount_profile_equals(&winner, &target_profile); + if (!mount_ok_for_bench) { + (void)unmount_image(file_path, unit_id, ATTACH_BACKEND_LVD); + unit_id = -1; + memset(devname, 0, sizeof(devname)); + mount_ok_for_bench = pfs_mount_with_profile( + file_path, fs_type, st.st_size, mount_point, + mount_read_only, force_mount, &target_profile, + &unit_id, devname, sizeof(devname), &mount_ms); + if (mount_ok_for_bench) + winner = target_profile; + } + + bench_result_t result; + memset(&result, 0, sizeof(result)); + result.profile = target_profile; + result.mount_ok = mount_ok_for_bench; + result.mount_ms = mount_ms; + if (mount_ok_for_bench) { + 
(void)bench_run_mounted(mount_point, cfg, &result); + } else { + result.any_failed = true; + result.score_ms = 0; + } + + bench_result_t snapshot[SM_PROBE_MAX_WINNERS]; + memset(snapshot, 0, sizeof(snapshot)); + (void)bench_load_results(filename_local, snapshot, + SM_PROBE_MAX_WINNERS, + NULL, NULL); + snapshot[target_idx] = result; + + bool bench_complete = (target_idx + 1 >= profile_count); + int best_idx = bench_complete ? bench_find_best(snapshot, profile_count) + : -1; + (void)bench_save_result(filename_local, target_idx, &result, + profile_count, bench_complete, best_idx); + + if (bench_complete) { + bench_log_report(filename_local, snapshot, profile_count, best_idx); + if (best_idx >= 0) + (void)cache_mount_profile(filename_local, + &snapshot[best_idx].profile); + } + + mount_profile_t desired_profile = original_profile; + if (bench_complete && best_idx >= 0) + desired_profile = snapshot[best_idx].profile; + + if (!mount_profile_equals(&winner, &desired_profile)) { + (void)unmount_image(file_path, unit_id, ATTACH_BACKEND_LVD); + unit_id = -1; + memset(devname, 0, sizeof(devname)); + uint32_t remount_ms = 0; + if (pfs_mount_with_profile(file_path, fs_type, st.st_size, + mount_point, mount_read_only, + force_mount, &desired_profile, + &unit_id, devname, sizeof(devname), + &remount_ms)) { + (void)remount_ms; + winner = desired_profile; + } + } + } + } + + notify_system_info("PFS mounted:\n%s", file_path); + attach_backend = ATTACH_BACKEND_LVD; + goto mount_success; + } + + set_pfs_cooldown(file_path, cfg->pfs_bruteforce_cooldown_seconds); + log_debug(" [IMG][BRUTE] exhausted summary: attach_e22=%u nmount_e2=%u nmount_e22=%u nmount_e96=%u other=%u attempts=%u", + brute_state.attach_einval_count, + brute_state.nmount_enoent_count, + brute_state.nmount_einval_count, + brute_state.nmount_semantic_count, brute_state.other_fail_count, + brute_state.attempt_idx); + log_debug(" [IMG][BRUTE] all profiles failed, moving to next image"); + return false; + } + + // 
Standard mount flow for non-PFS or brute-force disabled if (!attach_image_device(file_path, fs_type, mount_read_only, st.st_size, attach_backend, &unit_id, devname, sizeof(devname))) { return false; @@ -808,8 +2202,9 @@ bool mount_image(const char *file_path, image_fs_type_t fs_type) { return false; } - if (!validate_mounted_image(file_path, fs_type, attach_backend, unit_id, - devname, mount_point)) { +mount_success: + if (!validate_mounted_image(file_path, fs_type, attach_backend, unit_id, devname, + mount_point)) { return false; } @@ -1047,6 +2442,11 @@ void maybe_mount_image_file(const char *full_path, const char *display_name, image_fs_type_t fs_type = detect_image_fs_type(display_name); if (fs_type == IMAGE_FS_UNKNOWN) return; + if (fs_type == IMAGE_FS_PFS) { + time_t remaining = 0; + if (is_pfs_cooldown_active(full_path, &remaining)) + return; + } if (!is_source_stable_for_mount(full_path, display_name, "IMG")) { if (unstable_out) *unstable_out = true; @@ -1061,6 +2461,8 @@ void maybe_mount_image_file(const char *full_path, const char *display_name, } int mount_err = errno; + if (mount_err == EAGAIN) + return; if (bump_image_mount_attempts(full_path) == 1 && !sm_error_notified()) { notify_image_mount_failed(full_path, mount_err); } diff --git a/src/sm_mount_cache.c b/src/sm_mount_cache.c new file mode 100644 index 0000000..6f0537b --- /dev/null +++ b/src/sm_mount_cache.c @@ -0,0 +1,433 @@ +#include "sm_mount_cache.h" + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "sm_hash.h" +#include "sm_limits.h" +#include "sm_log.h" +#include "sm_mount_defs.h" +#include "sm_path_utils.h" +#include "sm_paths.h" + +// Autotune file path: /data/shadowmount/autotune.ini +#define AUTOTUNE_INI_PATH "/data/shadowmount/autotune.ini" + +// Line buffer for parsing +#define AUTOTUNE_LINE_SIZE 512 +#define AUTOTUNE_MAX_ENTRIES 256 + +typedef struct { + char filename[MAX_PATH]; + char profile_str[256]; +} autotune_cache_entry_t; + +// 
In-memory cache of parsed autotune entries +static autotune_cache_entry_t g_autotune_cache[AUTOTUNE_MAX_ENTRIES]; +static int g_autotune_cache_count = 0; +static bool g_autotune_cache_loaded = false; + +// Load autotune.ini into memory +static bool load_autotune_cache(void) { + if (g_autotune_cache_loaded) + return true; + + g_autotune_cache_count = 0; + memset(g_autotune_cache, 0, sizeof(g_autotune_cache)); + + FILE *fp = fopen(AUTOTUNE_INI_PATH, "r"); + if (!fp) { + // File may not exist yet, that's OK + g_autotune_cache_loaded = true; + return true; + } + + char line[AUTOTUNE_LINE_SIZE]; + while (fgets(line, sizeof(line), fp) != NULL && g_autotune_cache_count < AUTOTUNE_MAX_ENTRIES) { + // Trim trailing newline + size_t len = strlen(line); + if (len > 0 && line[len - 1] == '\n') + line[len - 1] = '\0'; + + // Skip empty lines and comments + if (line[0] == '\0' || line[0] == '#' || line[0] == ';') + continue; + + // Look for mount_profile=: + const char *prefix = "mount_profile="; + if (strncmp(line, prefix, strlen(prefix)) != 0) + continue; + + const char *value = line + strlen(prefix); + const char *colon = strchr(value, ':'); + if (!colon) + continue; + + size_t filename_len = (size_t)(colon - value); + if (filename_len == 0 || filename_len >= MAX_PATH) + continue; + + const char *profile_str = colon + 1; + if (!profile_str || profile_str[0] == '\0') + continue; + + // Store in cache + memcpy(g_autotune_cache[g_autotune_cache_count].filename, value, filename_len); + g_autotune_cache[g_autotune_cache_count].filename[filename_len] = '\0'; + (void)strlcpy(g_autotune_cache[g_autotune_cache_count].profile_str, profile_str, + sizeof(g_autotune_cache[g_autotune_cache_count].profile_str)); + g_autotune_cache_count++; + } + + fclose(fp); + g_autotune_cache_loaded = true; + return true; +} + +bool get_cached_mount_profile(const char *image_filename, + mount_profile_t *profile_out) { + if (!image_filename || !profile_out) + return false; + + if (!load_autotune_cache()) + 
return false; + + // Search cache for matching filename + for (int i = 0; i < g_autotune_cache_count; i++) { + if (strcmp(g_autotune_cache[i].filename, image_filename) == 0) { + if (parse_profile_from_cache(g_autotune_cache[i].profile_str, profile_out)) { + log_debug(" [IMG][CACHE] found cached profile for %s", image_filename); + return true; + } + break; + } + } + + return false; +} + +void format_profile_for_cache(const mount_profile_t *profile, + char *buf, size_t buf_size) { + if (!profile || !buf || buf_size == 0) + return; + + snprintf(buf, buf_size, + "v2:%u:0x%x:0x%x:%u:%u:%s:%s:%s:%u:%u:%u:%u:%u:%d", + profile->image_type, profile->raw_flags, + profile->normalized_flags, profile->sector_size, + profile->secondary_unit, profile->fstype, profile->budgetid, + profile->mkeymode, profile->sigverify, profile->playgo, + profile->disc, profile->include_ekpfs ? 1u : 0u, + profile->supports_noatime ? 1u : 0u, + profile->mount_read_only ? 1 : 0); +} + +bool parse_profile_from_cache(const char *cached_str, + mount_profile_t *profile_out) { + if (!cached_str || !profile_out) + return false; + + char buf[256]; + (void)strlcpy(buf, cached_str, sizeof(buf)); + + memset(profile_out, 0, sizeof(*profile_out)); + + // Parse format: + // v2:image_type:raw_flags:norm_flags:sector_size:sec2:fstype:budgetid:mkeymode:sig:playgo:disc:ekpfs:noatime:ro + // v1:image_type:raw_flags:norm_flags:sector_size:sec2:fstype:budgetid:mkeymode:sig:playgo:disc:ekpfs:ro + // v0:image_type:raw_flags:raw_flags:norm_flags:sector_size:fstype:budgetid:mkeymode:sig:playgo:disc:ro + char *saveptr = NULL; + char *token = NULL; + + // Version + token = strtok_r(buf, ":", &saveptr); + if (!token) + return false; + + bool is_v2 = (strcmp(token, "v2") == 0); + bool is_v1 = (strcmp(token, "v1") == 0); + bool is_v0 = (strcmp(token, "v0") == 0); + if (!is_v2 && !is_v1 && !is_v0) + return false; + + // image_type + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->image_type = 
(uint16_t)strtoul(token, NULL, 10); + + if (is_v2 || is_v1) { + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->raw_flags = (uint16_t)strtoul(token, NULL, 0); + + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->normalized_flags = (uint16_t)strtoul(token, NULL, 0); + + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->sector_size = (uint32_t)strtoul(token, NULL, 10); + + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->secondary_unit = (uint32_t)strtoul(token, NULL, 10); + } else { + // raw_flags + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->raw_flags = (uint16_t)strtoul(token, NULL, 10); + + // Skip stored raw_flags (redundant) + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + + // normalized_flags + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->normalized_flags = (uint16_t)strtoul(token, NULL, 16); + + // sector_size + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->sector_size = (uint32_t)strtoul(token, NULL, 10); + profile_out->secondary_unit = profile_out->sector_size; + } + + // fstype + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + // Allocate static string (short-lived in this context) + if (strcmp(token, "pfs") == 0) { + profile_out->fstype = "pfs"; + } else if (strcmp(token, "ppr_pfs") == 0) { + profile_out->fstype = "ppr_pfs"; + } else if (strcmp(token, "transaction_pfs") == 0) { + profile_out->fstype = "transaction_pfs"; + } else { + return false; + } + + // budgetid + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + if (strcmp(token, DEVPFS_BUDGET_GAME) == 0) { + profile_out->budgetid = DEVPFS_BUDGET_GAME; + } else if (strcmp(token, DEVPFS_BUDGET_SYSTEM) == 0) { + profile_out->budgetid = DEVPFS_BUDGET_SYSTEM; + } else { + return 
false; + } + + // mkeymode + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + if (strcmp(token, DEVPFS_MKEYMODE_SD) == 0) { + profile_out->mkeymode = DEVPFS_MKEYMODE_SD; + } else if (strcmp(token, DEVPFS_MKEYMODE_GD) == 0) { + profile_out->mkeymode = DEVPFS_MKEYMODE_GD; + } else if (strcmp(token, DEVPFS_MKEYMODE_AC) == 0) { + profile_out->mkeymode = DEVPFS_MKEYMODE_AC; + } else { + return false; + } + + // sigverify + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->sigverify = (uint8_t)strtoul(token, NULL, 10); + + // playgo + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->playgo = (uint8_t)strtoul(token, NULL, 10); + + // disc + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->disc = (uint8_t)strtoul(token, NULL, 10); + + if (is_v2 || is_v1) { + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->include_ekpfs = ((uint32_t)strtoul(token, NULL, 10) != 0); + + if (is_v2) { + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->supports_noatime = + ((uint32_t)strtoul(token, NULL, 10) != 0); + } else { + profile_out->supports_noatime = true; + } + } else { + profile_out->include_ekpfs = true; + profile_out->supports_noatime = true; + } + + // mount_read_only + token = strtok_r(NULL, ":", &saveptr); + if (!token) + return false; + profile_out->mount_read_only = ((uint32_t)strtoul(token, NULL, 10) != 0); + + profile_out->io_version = LVD_ATTACH_IO_VERSION_V0; + profile_out->label = "cached"; + + return true; +} + +static bool upsert_cache_profile_line(const char *image_filename, + const char *profile_str) { + char temp_path[MAX_PATH]; + int written = snprintf(temp_path, sizeof(temp_path), "%s.tmp", AUTOTUNE_INI_PATH); + if (written <= 0 || (size_t)written >= sizeof(temp_path)) + return false; + + FILE *in = fopen(AUTOTUNE_INI_PATH, "r"); + FILE *out = fopen(temp_path, "w"); + if 
(!out) { + log_debug(" [IMG][CACHE] temp open failed: %s (%s)", temp_path, + strerror(errno)); + if (in) + fclose(in); + return false; + } + + const char *prefix = "mount_profile="; + bool replaced = false; + if (in) { + char line[AUTOTUNE_LINE_SIZE]; + while (fgets(line, sizeof(line), in) != NULL) { + char original[AUTOTUNE_LINE_SIZE]; + (void)strlcpy(original, line, sizeof(original)); + + size_t len = strlen(line); + if (len > 0 && line[len - 1] == '\n') + line[len - 1] = '\0'; + + bool is_target = false; + if (strncmp(line, prefix, strlen(prefix)) == 0) { + const char *value = line + strlen(prefix); + const char *colon = strchr(value, ':'); + if (colon) { + size_t filename_len = (size_t)(colon - value); + if (filename_len == strlen(image_filename) && + strncmp(value, image_filename, filename_len) == 0) { + is_target = true; + } + } + } + + if (is_target) { + if (!replaced) { + if (fprintf(out, "mount_profile=%s:%s\n", image_filename, + profile_str) < 0) { + goto write_failed; + } + replaced = true; + } + } else { + if (fputs(original, out) == EOF) + goto write_failed; + } + } + + fclose(in); + in = NULL; + } + + if (!replaced && + fprintf(out, "mount_profile=%s:%s\n", image_filename, profile_str) < 0) { + goto write_failed; + } + + if (fclose(out) != 0) { + out = NULL; + unlink(temp_path); + return false; + } + out = NULL; + + if (rename(temp_path, AUTOTUNE_INI_PATH) != 0) { + unlink(temp_path); + return false; + } + + return true; + +write_failed: + if (in) + fclose(in); + fclose(out); + unlink(temp_path); + return false; +} + +bool cache_mount_profile(const char *image_filename, + const mount_profile_t *profile) { + if (!image_filename || !profile) + return false; + + if (!load_autotune_cache()) + return false; + + // Check if already cached + int existing_index = -1; + for (int i = 0; i < g_autotune_cache_count; i++) { + if (strcmp(g_autotune_cache[i].filename, image_filename) == 0) { + existing_index = i; + break; + } + } + + // Format profile for caching + 
char profile_str[256]; + format_profile_for_cache(profile, profile_str, sizeof(profile_str)); + + if (!upsert_cache_profile_line(image_filename, profile_str)) { + mkdir("/data/shadowmount", 0777); + if (!upsert_cache_profile_line(image_filename, profile_str)) { + log_debug(" [IMG][CACHE] failed to upsert %s", AUTOTUNE_INI_PATH); + return false; + } + } + + if (existing_index >= 0) { + (void)strlcpy(g_autotune_cache[existing_index].profile_str, profile_str, + sizeof(g_autotune_cache[existing_index].profile_str)); + } else if (g_autotune_cache_count < AUTOTUNE_MAX_ENTRIES) { + (void)strlcpy(g_autotune_cache[g_autotune_cache_count].filename, + image_filename, + sizeof(g_autotune_cache[g_autotune_cache_count].filename)); + (void)strlcpy(g_autotune_cache[g_autotune_cache_count].profile_str, + profile_str, + sizeof(g_autotune_cache[g_autotune_cache_count].profile_str)); + g_autotune_cache_count++; + } + + log_debug(" [IMG][CACHE] cached profile for %s: %s", image_filename, profile_str); + return true; +} diff --git a/src/sm_mount_profile.c b/src/sm_mount_profile.c new file mode 100644 index 0000000..6df03fd --- /dev/null +++ b/src/sm_mount_profile.c @@ -0,0 +1,117 @@ +#include "sm_mount_profile.h" + +#include +#include + +#include "sm_log.h" +#include "sm_mount_defs.h" + +// Helper to normalize raw flags (mirrors normalize_lvd_raw_flags from sm_image.c) +static uint16_t profile_normalize_lvd_raw_flags(uint16_t raw_flags) { + if ((raw_flags & 0x800Eu) != 0u) { + uint32_t raw = (uint32_t)raw_flags; + uint32_t len = (raw & 0xFFFF8000u) + ((raw & 2u) << 6) + + (8u * (raw & 1u)) + (2u * ((raw >> 2) & 1u)) + + (2u * (raw & 8u)) + 4u; + return (uint16_t)len; + } + return (uint16_t)(8u * ((uint32_t)raw_flags & 1u) + 4u); +} + +// Helper to get image type (mirrors get_lvd_image_type from sm_image.c) +static uint16_t profile_get_lvd_image_type(image_fs_type_t fs_type) { + if (fs_type == IMAGE_FS_UFS) + return LVD_ATTACH_IMAGE_TYPE_UFS_DOWNLOAD_DATA; + if (fs_type == IMAGE_FS_PFS) 
+ return LVD_ATTACH_IMAGE_TYPE_PFS_SAVE_DATA; + if (fs_type == IMAGE_FS_ZFS) + return LVD_ATTACH_IMAGE_TYPE_ZFS; + return LVD_ATTACH_IMAGE_TYPE_SINGLE; +} + +bool mount_profile_validate(const mount_profile_t *profile) { + if (!profile) + return false; + + if (profile->io_version != LVD_ATTACH_IO_VERSION_V0) + return false; + + if (!profile->fstype || profile->fstype[0] == '\0') + return false; + + if (!profile->budgetid || profile->budgetid[0] == '\0') + return false; + + if (!profile->mkeymode || profile->mkeymode[0] == '\0') + return false; + + if (profile->sector_size == 0) + return false; + + if (profile->secondary_unit == 0) + return false; + + return true; +} + +void mount_profile_log(const mount_profile_t *profile, const char *tag) { + if (!profile || !tag) + return; + + char compact_buf[128]; + mount_profile_format_compact(profile, compact_buf, sizeof(compact_buf)); + + log_debug(" [%s] profile: %s", tag, compact_buf); +} + +void mount_profile_format_compact(const mount_profile_t *profile, + char *buf, size_t buf_size) { + if (!profile || !buf || buf_size == 0) + return; + + snprintf(buf, buf_size, + "img=%u raw=0x%x flags=0x%x sec=%u sec2=%u fstype=%s budget=%s " + "mkeymode=%s sigv=%u playgo=%u disc=%u ekpfs=%u noatime=%u ro=%d", + profile->image_type, profile->raw_flags, profile->normalized_flags, + profile->sector_size, profile->secondary_unit, profile->fstype, + profile->budgetid, profile->mkeymode, profile->sigverify, + profile->playgo, profile->disc, profile->include_ekpfs ? 1u : 0u, + profile->supports_noatime ? 1u : 0u, + profile->mount_read_only ? 
1 : 0); +} + +mount_profile_t mount_profile_create_default(image_fs_type_t fs_type, + bool mount_read_only) { + mount_profile_t profile; + memset(&profile, 0, sizeof(profile)); + + profile.io_version = LVD_ATTACH_IO_VERSION_V0; + profile.image_type = profile_get_lvd_image_type(fs_type); + profile.sector_size = 4096; + profile.secondary_unit = 4096; + profile.fstype = "pfs"; + profile.budgetid = DEVPFS_BUDGET_GAME; + profile.mkeymode = DEVPFS_MKEYMODE_SD; + profile.sigverify = (PFS_MOUNT_SIGVERIFY != 0) ? 1u : 0u; + profile.playgo = (PFS_MOUNT_PLAYGO != 0) ? 1u : 0u; + profile.disc = (PFS_MOUNT_DISC != 0) ? 1u : 0u; + profile.include_ekpfs = (fs_type == IMAGE_FS_PFS); + profile.supports_noatime = true; + profile.mount_read_only = mount_read_only; + profile.label = "default"; + + // Compute raw_flags and normalized_flags based on mount_read_only + // This mirrors get_lvd_attach_raw_flags logic + if (fs_type == IMAGE_FS_UFS) { + profile.raw_flags = mount_read_only ? LVD_ATTACH_RAW_FLAGS_DD_RO + : LVD_ATTACH_RAW_FLAGS_DD_RW; + } else { + profile.raw_flags = mount_read_only ? LVD_ATTACH_RAW_FLAGS_SINGLE_RO + : LVD_ATTACH_RAW_FLAGS_SINGLE_RW; + } + + // Normalize flags + profile.normalized_flags = profile_normalize_lvd_raw_flags(profile.raw_flags); + + return profile; +} diff --git a/zfs_kernel_support.md b/zfs_kernel_support.md new file mode 100644 index 0000000..5175647 --- /dev/null +++ b/zfs_kernel_support.md @@ -0,0 +1,273 @@ +# ZFS Kernel Support Research and Integration Plan + +Date: 2026-03-29 + +## Goal + +Enable reliable automatic availability of ZFS support for image mounting by loading a PS5 kernel module when needed, with minimal user action and safe fallback behavior. + +This plan assumes development/research on hardware you own and control. 
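To make the "minimal user action and safe fallback behavior" concrete: the gate this plan keeps coming back to (attempt a load only when needed, throttle after failure) can be sketched as a tiny state machine. This is a hedged illustration, not the implementation — the state names mirror the `UNKNOWN` / `READY` / `FAILED_COOLDOWN` machine proposed for the module manager later in this plan, while the `zmod_gate_t` struct and function names are invented for the sketch.

```c
#include <stdbool.h>
#include <time.h>

// Hedged sketch: a load-attempt gate with failure cooldown.
// States mirror the plan's UNKNOWN -> READY / FAILED_COOLDOWN machine.
typedef enum { ZMOD_UNKNOWN, ZMOD_READY, ZMOD_FAILED_COOLDOWN } zmod_state_t;

typedef struct {
    zmod_state_t state;
    time_t retry_after;        // earliest time another attempt is allowed
    unsigned cooldown_seconds; // e.g. from zfs_autoload_cooldown_seconds
} zmod_gate_t;

// Returns true only when a load attempt may proceed right now.
static bool zmod_gate_may_attempt(const zmod_gate_t *g, time_t now) {
    if (g->state == ZMOD_READY)
        return false;                          // module already available
    if (g->state == ZMOD_FAILED_COOLDOWN && now < g->retry_after)
        return false;                          // throttled after a failure
    return true;
}

// Feed the outcome of one attempt back into the gate.
static void zmod_gate_record(zmod_gate_t *g, bool ok, time_t now) {
    if (ok) {
        g->state = ZMOD_READY;                 // cache success, never reload
        return;
    }
    g->state = ZMOD_FAILED_COOLDOWN;
    g->retry_after = now + (time_t)g->cooldown_seconds;
}
```

A caller shaped like the proposed `sm_zfs_module_ensure_loaded(...)` would consult `zmod_gate_may_attempt()` before touching any loader transport and report the result through `zmod_gate_record()`, which gives the "one attempt per cooldown window" behavior regardless of which loader strategy is active.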
+
+## What Was Indexed
+
+### Pages
+
+- PS5 syscall table:
+  - Relevant entries confirmed:
+    - `sys_kldload` at syscall id `0x130`
+    - `sys_kldunload` at syscall id `0x131`
+    - `sys_kldfind` at syscall id `0x132`
+    - `sys_kldsym` at syscall id `0x151`
+    - `sys_nmount` at syscall id `0x17a`
+
+### Related repositories cloned under `related_projects/`
+
+- `PS5_kldload`
+- `ps5-kld-sdk`
+- `etaHEN`
+
+### Key local references from indexed repos
+
+- `related_projects/PS5_kldload/README.md`
+  - Listener model on port `9022`
+  - Firmware support statements and notes
+- `related_projects/PS5_kldload/src/main.c`
+  - Receives module payload bytes and creates kernel thread
+- `related_projects/PS5_kldload/src/server.c`
+  - TCP server loop and payload callback flow
+- `related_projects/ps5-kld-sdk/include/ps5kld/kernel.h`
+  - `kproc_args` interface and module entrypoint assumptions
+- `related_projects/ps5-kld-sdk/src/kernel.c`
+  - Firmware-offset initialization strategy
+- `related_projects/etaHEN/README.md`
+  - Existing auto-start concepts and payload ecosystem
+- `related_projects/etaHEN/Source Code/daemon/source/main.cpp`
+  - Daemon startup lifecycle and config-driven behavior
+- `related_projects/etaHEN/Source Code/daemon/include/globalconf.hpp`
+  - Runtime config fields and defaults
+
+### ShadowMount++ integration points
+
+- `src/sm_image.c`
+  - ZFS image detection and mount path (`.ffzfs`, `fstype=zfs`, `nmount` flow)
+  - Best place to trigger "ensure ZFS module loaded" just before ZFS `nmount`
+- `src/sm_config_mount.c`
+  - Runtime config defaults and parsing (ideal place for autoload settings)
+- `src/main.c`
+  - Initialization and daemon lifecycle
+
+## Findings Summary
+
+1. Your codebase already has image-type plumbing for ZFS (`IMAGE_FS_ZFS`, `.ffzfs`, zfs nmount iov).
+2. The current missing piece is robust module availability orchestration before ZFS mount attempts.
+3.
Existing ecosystem projects provide a practical transport and runtime model for module loading, but compatibility is firmware-sensitive. +4. A safe design should not assume one loader path. It should support multiple strategies with runtime fallback. + +## Constraints and Risks + +- Firmware variance is the primary risk: offsets and behavior can change by version. +- "Module loaded" signal quality matters: false positives can cause repeated mount failures. +- Startup race conditions can occur if mount attempts happen before loader/service readiness. +- Repeated load attempts can destabilize runtime if throttling/backoff is absent. +- ZFS module ABI must match the target kernel expectations. + +## Proposed Architecture: ZFS Module Manager Layer + +Add a small internal component in ShadowMount++: + +- New component suggestion: `src/sm_zfs_module.c` with header `include/sm_zfs_module.h` +- Responsibility: + - Determine whether ZFS is already available + - Perform one-time load attempt if needed + - Cache state, backoff on failures, expose status + +Suggested API: + +- `bool sm_zfs_module_ensure_loaded(char *err, size_t err_sz);` +- `bool sm_zfs_module_is_ready(void);` +- `void sm_zfs_module_reset_state(void);` + +## Solution Options + +## Solution A (Recommended): Loader-Bridge strategy (network loader endpoint) + +Use a running loader service endpoint as the module injection transport (for example, a loader daemon pattern comparable to `PS5_kldload`) and make ShadowMount++ trigger it only when first ZFS mount is requested. + +### Why this is recommended + +- Decouples ShadowMount++ from low-level kernel primitive maintenance +- Keeps your project focused on mount orchestration and policy +- Easier to support multiple environments by configuration + +### Integration design + +1. 
Add config keys in `config.ini.example` and a parser in `src/sm_config_mount.c`:
   - `zfs_autoload = 0|1`
   - `zfs_loader_mode = tcp|none`
   - `zfs_loader_host = 127.0.0.1`
   - `zfs_loader_port = 9022`
   - `zfs_module_path = /data/shadowmount/zfs_kmod.bin`
   - `zfs_autoload_cooldown_seconds = 30`
2. In `mount_image()` inside `src/sm_image.c`:
   - If the fs type is ZFS and `zfs_autoload=1`, call `sm_zfs_module_ensure_loaded(...)` before `perform_image_nmount(...)`.
3. In `sm_zfs_module_ensure_loaded(...)`:
   - Fast path: if known-ready, return true.
   - Probe path: perform a light readiness check (implementation-specific, with a timeout).
   - If not ready, attempt one load via the configured loader bridge.
   - Re-probe readiness; on success, cache the ready state.
   - On failure, set a cooldown window and return a detailed error.
4. Add structured logs and user notifications on:
   - attempt started
   - success
   - failure with cooldown active

### Operational behavior

- The first `.ffzfs` mount triggers a load attempt.
- A successful load is cached; subsequent mounts do not reload.
- Failures are throttled to avoid repeated aggressive retries.

## Solution B: etaHEN-daemon plugin/service orchestration

Implement ZFS module load orchestration on the etaHEN side (plugin/daemon), and let ShadowMount++ only request/check "ZFS ready" status.

### Pros (Solution B)

- Better centralization if your stack already depends on etaHEN services
- Can integrate with existing startup and toolbox/autostart workflows

### Cons (Solution B)

- Adds an external runtime dependency to ShadowMount++
- Version and deployment coupling to the etaHEN ecosystem

### When to pick it

- You already deploy etaHEN on all targets and want a single control plane for multiple payloads/modules.

## Solution C: Direct syscall-oriented loader path

Call the kernel module load path through direct syscall wrappers in your own code. 
+ +### Pros (Solution C) + +- Fewer moving parts at runtime +- Potentially lower latency + +### Cons (Solution C) + +- Highest maintenance burden across firmware versions +- Strongly coupled to kernel ABI/offset details +- Riskier to keep stable over updates + +### Recommendation + +- Keep this as an experimental fallback path only, not primary. + +## Recommended Final Strategy + +Use Solution A as primary and optionally support Solution B in parallel for users with etaHEN-centric setups. + +Concretely: + +- Primary mode: `zfs_loader_mode=tcp` +- Optional mode: `zfs_loader_mode=etahen` (future extension) +- Disable mode: `zfs_loader_mode=none` for manual workflows + +## Implementation Plan (Phased) + +## Phase 1: Foundation and Config + +1. Add runtime config fields in `include/sm_types.h`: + - autoload enable flag + - loader mode enum/string + - host/port/path/cooldown values +2. Parse and log them in `src/sm_config_mount.c`. +3. Document them in `config.ini.example` and `README.md`. + +Exit criteria: + +- Config loads with sane defaults and appears in debug logs. + +## Phase 2: ZFS Module Manager + +1. Add `include/sm_zfs_module.h` and `src/sm_zfs_module.c`. +2. Implement internal state machine: + - `UNKNOWN` -> `READY` + - `UNKNOWN` -> `FAILED_COOLDOWN` + - `FAILED_COOLDOWN` -> retry after deadline +3. Implement short timeouts and non-blocking-safe behavior where possible. +4. Add detailed error codes/messages. + +Exit criteria: + +- Standalone manager unit behavior validated by logs and synthetic tests. + +## Phase 3: Mount Pipeline Hook + +1. In `src/sm_image.c`, before ZFS `nmount`, call ensure-loaded. +2. If ensure-loaded fails: + - fail mount with explicit actionable error + - notify once per cooldown period +3. Keep non-ZFS image paths untouched. + +Exit criteria: + +- `.ffzfs` mount attempts trigger exactly one load flow during cooldown window. +- UFS/exFAT/PFS behavior unchanged. + +## Phase 4: Validation Matrix + +1. 
Test combinations:
   - Loader available/unavailable
   - Module file present/missing
   - ZFS already loaded/not loaded
   - Repeated mount attempts under failure
2. Cover the firmware matrix across, at minimum, your active target versions.
3. Confirm no regressions for existing image types.

Exit criteria:

- Known failure modes are deterministic, throttled, and clearly logged.

## Phase 5: Hardening

1. Add a retry backoff policy with an upper bound.
2. Add an optional health-check interval to refresh the readiness cache.
3. Add a kill-switch config: `zfs_autoload=0` disables autoload immediately.

Exit criteria:

- Runtime stability over long sessions and repeated scan/mount cycles.

## Suggested Config Defaults

- `zfs_autoload = 1`
- `zfs_loader_mode = tcp`
- `zfs_loader_host = 127.0.0.1`
- `zfs_loader_port = 9022`
- `zfs_module_path = /data/shadowmount/zfs_kmod.bin`
- `zfs_autoload_cooldown_seconds = 30`

## Observability Requirements

Add dedicated log tags:

- `[ZFSMOD] ready=true source=probe`
- `[ZFSMOD] autoload attempt mode=tcp host=... port=...`
- `[ZFSMOD] autoload failed err=... cooldown=...`
- `[ZFSMOD] autoload success elapsed_ms=...`

These tags make it possible to distinguish module-availability failures from `nmount` argument/filesystem failures.

## Minimal Test Cases

1. First mount with the module absent and the loader reachable -> autoload succeeds, mount succeeds.
2. First mount with the module absent and the loader unreachable -> autoload fails, retries are throttled.
3. First mount with the module already present -> no load attempt, mount proceeds.
4. Non-ZFS mounts -> no calls to the module manager.
5. Process restart with persistent module state -> the fast-path readiness check succeeds.

## Decision

Proceed with Solution A now, keep Solution B as an optional secondary integration, and leave Solution C as an experimental fallback only.