1 change: 1 addition & 0 deletions .gitignore
@@ -8,3 +8,4 @@ refs/*
docs/*
.DS_Store
/zfs_implementation*
/related_projects
36 changes: 36 additions & 0 deletions README.md
@@ -75,6 +75,42 @@ Supported keys (all optional):
- `md_ufs_sector_size=<value>` (default: `512`)
- `md_zfs_sector_size=<value>` (default: `512`)

## Adaptive Mount Strategy (Brute-Force Parameter Discovery)

For `.ffpfs` (PFS) images, ShadowMountPlus includes an optional adaptive mount strategy that automatically discovers the correct mount parameters when the standard defaults don't work. This is useful for PFS dumps with non-standard metadata or sector layouts.

**How it works:**
1. When mounting a PFS image, if the standard mount fails, the system tries a bounded set of parameter combinations.
2. Each combination (image type, raw flags, sector size, filesystem variant) is attempted in priority order.
3. After each failed attempt, the system cleans up and waits for the configured delay (`pfs_bruteforce_sleep_ms`, 3 seconds by default) before trying the next combination.
4. When a successful mount is found, the winning parameters are cached and reused for future mounts of the same image.
5. If all combinations fail, the image is marked as failed and the scan moves to the next image.
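The steps above can be sketched roughly as follows. This is a minimal illustration, not the shipped implementation: `try_mount` and `cleanup` are hypothetical stand-ins for the real mount calls, and the candidate lists mirror the example values shown in `config.ini.example`.

```python
import itertools
import time

# Hypothetical candidate parameters, in priority order (values taken from
# the overrides shown in config.ini.example).
SECTOR_SIZES = [4096]
IMAGE_TYPES = [0, 5, 2, 3, 4, 6, 1, 7]
RAW_FLAGS = [0x9, 0x8, 0xD, 0xC]
FSTYPES = ["pfs", "ppr_pfs", "transaction_pfs"]

def discover_mount_params(image, try_mount, cleanup,
                          sleep_ms=3000, max_attempts=20, max_seconds=60):
    """Try parameter combinations in priority order until one mounts,
    honoring the per-image attempt and time caps."""
    deadline = time.monotonic() + max_seconds
    attempts = 0
    for itype, flags, ssize, fstype in itertools.product(
            IMAGE_TYPES, RAW_FLAGS, SECTOR_SIZES, FSTYPES):
        if attempts >= max_attempts or time.monotonic() > deadline:
            break
        attempts += 1
        if try_mount(image, itype, flags, ssize, fstype):
            # Winning profile; the real system caches it in autotune.ini.
            return (itype, flags, ssize, fstype)
        cleanup(image)                 # unmount/tear down the failed attempt
        time.sleep(sleep_ms / 1000.0)  # configured delay between attempts
    return None                        # exhausted: mark the image as failed
```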

**Configuration keys (all optional, all PFS-specific):**
- `pfs_bruteforce_enabled=1|0` (enable/disable; default: `1`)
- `pfs_bruteforce_sleep_ms=<milliseconds>` (delay between failed attempts; default: `3000`)
- `pfs_bruteforce_max_attempts=<count>` (max attempts per image; default: `20`)
- `pfs_bruteforce_max_seconds_per_image=<seconds>` (max time per image; default: `60`)
- `pfs_bruteforce_cooldown_seconds=<seconds>` (skip heavy brute-force on recently exhausted images; default: `300`)
- `pfs_bruteforce_max_global_attempts=<count>` (global attempt cap per scan window; default: `200`)
- `pfs_bruteforce_sector_sizes=<csv>` (override list; accepted by the parser, currently advisory)
- `pfs_bruteforce_image_types=<csv>` (override list; accepted by the parser, currently advisory)
- `pfs_bruteforce_raw_flags=<csv>` (override list; accepted by the parser, currently advisory)
- `pfs_bruteforce_fstypes=<csv>` (override list; accepted by the parser, currently advisory)

**Cache storage:**
- Successful profiles are automatically cached in `/data/shadowmount/autotune.ini` with the line format:
```ini
mount_profile=<image_filename>:<profile_data>
```
- The cache is consulted first on every mount, significantly accelerating subsequent mounts of the same image.
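Reading and writing that cache line format takes only a few lines. This sketch treats `<profile_data>` as an opaque string and assumes file names contain no `:`; the helper names are illustrative, not the project's API.

```python
def format_autotune_line(filename, profile):
    """Serialize a cached profile to the autotune.ini line format."""
    return f"mount_profile={filename}:{profile}"

def parse_autotune_line(line):
    """Parse a 'mount_profile=<image_filename>:<profile_data>' line.

    Returns (filename, profile_data), or None for unrelated lines.
    File names may contain spaces; the first ':' separates the name
    from the profile data (assumed: names contain no ':').
    """
    key, _, value = line.strip().partition("=")
    if key != "mount_profile":
        return None
    filename, sep, profile = value.partition(":")
    if not sep:
        return None
    return filename, profile
```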

**Example workflow:**
1. First mount of `MyGame.ffpfs` fails with standard parameters.
2. Brute-force discovers that image_type=5, raw_flags=0x8, sector_size=4096, fstype=pfs works.
3. Profile is cached in autotune.ini.
4. Next mount of `MyGame.ffpfs` tries the cached profile first and succeeds immediately.

Per-image mode override behavior:
- Match is done by image file name (without path).
- File names with spaces are supported.
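The matching rule above amounts to a basename lookup; a minimal sketch (the `cached_profiles` mapping and helper name are hypothetical):

```python
import os

def profile_for_image(image_path, cached_profiles):
    """Look up a cached profile by bare file name, ignoring the directory.

    Keys in cached_profiles are file names as they appear on disk,
    spaces included.
    """
    name = os.path.basename(image_path)
    return cached_profiles.get(name)
```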
61 changes: 61 additions & 0 deletions build.sh
@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail

# One-command local build wrapper for macOS/Linux hosts.
# Runs the same dependency flow as CI inside Ubuntu.

IMAGE="ubuntu:24.04"
WORKDIR="/work"
REPO_NAME="pacbrew-repo"

if ! command -v docker >/dev/null 2>&1; then
  echo "[build.sh] docker is required but not found." >&2
  echo "Install Docker Desktop (or Colima + docker CLI) and retry." >&2
  exit 1
fi

SCRIPT='set -euo pipefail
export DEBIAN_FRONTEND=noninteractive

apt update
apt install -y \
  autoconf \
  automake \
  build-essential \
  clang-18 \
  curl \
  git \
  libarchive-tools \
  libtool \
  lld-18 \
  makepkg \
  meson \
  pacman-package-manager \
  pkg-config \
  xxd \
  zip

if [[ ! -d "'"${WORKDIR}"'"/"'"${REPO_NAME}"'" ]]; then
  git clone https://github.com/EchoStretch/pacbrew-repo "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"
fi

cd "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"/sdk
makepkg -c -f
pacman --noconfirm -U ./ps5-payload-*.pkg.tar.gz

cd "'"${WORKDIR}"'"/"'"${REPO_NAME}"'"/sqlite
makepkg -c -f
pacman --noconfirm -U ./ps5-payload-*.pkg.tar.gz

cd "'"${WORKDIR}"'"
make clean all

echo "[build.sh] build complete"
'

echo "[build.sh] starting containerized build..."
docker run --rm -t \
  -v "$PWD:${WORKDIR}" \
  -w "${WORKDIR}" \
  "${IMAGE}" \
  bash -lc "${SCRIPT}"
58 changes: 58 additions & 0 deletions config.ini.example
@@ -97,3 +97,61 @@ md_zfs_sector_size=512
# "/mnt/shadowmnt" is always added automatically.
# scanpath=/data/homebrew
# scanpath=/data/etaHEN/games

# ===== Adaptive Mount Strategy (Brute-Force) for PFS Discovery =====
# These settings control the PFS parameter discovery system.
# When a PFS image mount fails or has unknown structure, the system
# automatically tries different parameter combinations to find what works.

# Enable adaptive mount strategy for PFS images:
# 1/true/yes/on -> enable brute-force parameter discovery (recommended for PFS)
# 0/false/no/off -> use fixed defaults only
pfs_bruteforce_enabled=1

# Sleep duration between failed mount attempts (milliseconds):
pfs_bruteforce_sleep_ms=200

# Maximum attempts per image before giving up:
pfs_bruteforce_max_attempts=20

# Maximum time per image (seconds) before moving to next:
pfs_bruteforce_max_seconds_per_image=60

# Cooldown after full exhaustion (seconds). During cooldown, heavy brute-force
# is skipped for that image so scan can continue with others.
pfs_bruteforce_cooldown_seconds=300

# Global cap for brute-force attempts per scan window.
pfs_bruteforce_max_global_attempts=200

# Optional list overrides (currently advisory; parser accepts these keys):
pfs_bruteforce_sector_sizes=4096
pfs_bruteforce_image_types=0,5,2,3,4,6,1,7
pfs_bruteforce_raw_flags=0x9,0x8,0xD,0xC
pfs_bruteforce_fstypes=pfs,ppr_pfs,transaction_pfs

# Tested profiles are cached in /data/shadowmount/autotune.ini
# and reused automatically on future mounts of the same image.
# The cache section line format is:
# mount_profile=<image_filename>:<profile_data>

# Optional probe mode: discover and store all working PFS profiles for each
# image into a dedicated file:
# /data/shadowmount/pfs_probe_<image_filename>.ini
# Disabled by default because it performs extra mount/unmount attempts.
pfs_probe_enabled=0

# Optional benchmark mode: after mounting PFS via brute-force/profile cache,
# benchmark one discovered profile per run and persist progress in:
# /data/shadowmount/benchmarking.ini
# This allows resuming after crash/reboot from the last tested profile.
pfs_bench_enabled=0

# Benchmark file read size target in bytes (minimum total bytes read per test
# file, chunked internally in 64 KiB reads).
pfs_bench_min_read_bytes=65536

# Benchmark slow-read threshold in milliseconds. Reads or read gaps over this
# threshold are counted as delays in the report.
pfs_bench_delay_threshold_ms=500
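The measurement these two benchmark settings describe can be sketched as follows. This is a rough illustration under stated assumptions, not the shipped implementation: `bench_read` is a hypothetical helper that reads at least `pfs_bench_min_read_bytes` in 64 KiB chunks and counts reads slower than `pfs_bench_delay_threshold_ms`.

```python
import time

CHUNK = 64 * 1024  # 64 KiB, matching the internal chunking described above

def bench_read(path, min_read_bytes=65536, delay_threshold_ms=500):
    """Read at least min_read_bytes from path in 64 KiB chunks.

    Returns (total_bytes_read, delay_count), where delay_count is the
    number of individual reads that exceeded the threshold.
    """
    delays = 0
    total = 0
    with open(path, "rb") as f:
        while total < min_read_bytes:
            start = time.monotonic()
            chunk = f.read(CHUNK)
            elapsed_ms = (time.monotonic() - start) * 1000.0
            if not chunk:
                break  # EOF before reaching the target size
            total += len(chunk)
            if elapsed_ms > delay_threshold_ms:
                delays += 1
    return total, delays
```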

215 changes: 215 additions & 0 deletions create_gp4.py
@@ -0,0 +1,215 @@
#!/usr/bin/env python3
"""
create_gp4.py
Python port of the create-gp4 utility.
"""

import argparse
import os
import sys
from datetime import datetime
from pathlib import Path
from xml.etree.ElementTree import Element, SubElement, tostring


def error_exit(msg: str, *params) -> None:
    print(msg % params if params else msg, end="")
    sys.exit(1)


class DirNode:
    def __init__(self, name: str):
        self.name = name
        self.children: list["DirNode"] = []


def contains_substr(items: list[str], needle: str) -> bool:
    # Mirrors Go behavior: strings.Contains(a, e)
    return any(needle in a for a in items)


def get_subdir(node: DirNode, name: str) -> DirNode | None:
    if node.name == name:
        return node
    for child in node.children:
        found = get_subdir(child, name)
        if found is not None:
            return found
    return None


def get_root_dir(root: list[DirNode], name: str) -> DirNode | None:
    for d in root:
        if d.name == name:
            return d
    return None


def indent_xml_with_tabs(xml_str: str, base_prefix: str = "\t", indent: str = "\t") -> str:
    # Keep pretty output style close to Go's xml.MarshalIndent(prefix="\t", indent="\t").
    import xml.dom.minidom as minidom

    dom = minidom.parseString(xml_str.encode("utf-8"))
    pretty = dom.toprettyxml(indent=indent)

    lines = [ln for ln in pretty.splitlines() if ln.strip()]
    if lines and lines[0].startswith("<?xml"):
        lines = lines[1:]

    return "\n".join(base_prefix + ln for ln in lines)


def build_rootdir_tag(files: list[str]) -> str:
    paths: list[str] = []
    paths_clean: list[str] = []
    root_dirs: list[DirNode] = []

    # Keep only directory paths (remove filenames).
    for f in files:
        if f and "/" in f:
            paths.append(Path(f).parent.as_posix())

    # Sort by descending path length.
    paths.sort(key=len, reverse=True)

    # Remove duplicate/sub-paths using Go's substring behavior.
    for p in paths:
        if not contains_substr(paths_clean, p):
            paths_clean.append(p)

    # Build tree.
    for p in paths_clean:
        split = p.split("/")
        if not split or not split[0]:
            continue

        dir_ptr = get_root_dir(root_dirs, split[0])
        if dir_ptr is None:
            d = DirNode(split[0])
            dir_ptr = d
            for part in split[1:]:
                child = DirNode(part)
                dir_ptr.children.append(child)
                dir_ptr = child
            root_dirs.append(d)
        else:
            for part in split[1:]:
                d = get_subdir(dir_ptr, part)
                if d is not None:
                    dir_ptr = d
                    continue
                child = DirNode(part)
                dir_ptr.children.append(child)
                dir_ptr = child

    # Convert to XML.
    root_elem = Element("rootdir")

    def append_dir(parent_elem: Element, node: DirNode) -> None:
        d = SubElement(parent_elem, "dir", {"targ_name": node.name})
        for c in node.children:
            append_dir(d, c)

    for d in root_dirs:
        append_dir(root_elem, d)

    xml_raw = tostring(root_elem, encoding="unicode")
    return indent_xml_with_tabs(xml_raw, base_prefix="\t", indent="\t")


def get_file_list(files_path: str) -> list[str]:
    files: list[str] = []
    root = Path(files_path).resolve()

    if not root.exists() or not root.is_dir():
        error_exit("Path does not exist or is not a directory: %s\n", files_path)

    # Recursively include all files.
    for current_root, dirs, filenames in os.walk(root):
        dirs.sort()
        filenames.sort()
        current = Path(current_root)
        for fn in filenames:
            full = current / fn
            rel = full.relative_to(root).as_posix()
            files.append(rel)

    return files


def parse_files_to_tags(files: list[str]) -> list[str]:
    file_tags: list[str] = []
    for file in files:
        if file:
            f = Path(file).as_posix()
            file_tags.append(f'\t\t<file targ_path="{f}" orig_path="{f}" />')
    return file_tags


def create_gp4(path: str, content_id: str, files: str, files_path: str) -> None:
    if files:
        file_list = files.split(" ")
    else:
        file_list = get_file_list(files_path)

    file_tag_list = parse_files_to_tags(file_list)
    root_dir = build_rootdir_tag(file_list)
    file_tags = "\n".join(file_tag_list)
    current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    gp4_contents = (
        "<?xml version=\"1.0\"?>\n"
        "<psproject xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" fmt=\"gp4\" version=\"1000\">\n"
        "\t<volume>\n"
        "\t\t<volume_type>pkg_ps4_app</volume_type>\n"
        "\t\t<volume_id>PS4VOLUME</volume_id>\n"
        f"\t\t<volume_ts>{current_time}</volume_ts>\n"
        f"\t\t<package content_id=\"{content_id}\" passcode=\"00000000000000000000000000000000\"\n"
        "\t\t\tstorage_type=\"digital50\" app_type=\"full\" />\n"
        "\t\t<chunk_info chunk_count=\"1\" scenario_count=\"1\">\n"
        "\t\t\t<chunks>\n"
        "\t\t\t\t<chunk id=\"0\" layer_no=\"0\" label=\"Chunk #0\" />\n"
        "\t\t\t</chunks>\n"
        "\t\t\t<scenarios default_id=\"0\">\n"
        "\t\t\t\t<scenario id=\"0\" type=\"sp\" initial_chunk_count=\"1\" label=\"Scenario #0\">0</scenario>\n"
        "\t\t\t</scenarios>\n"
        "\t\t</chunk_info>\n"
        "\t</volume>\n"
        "\t<files img_no=\"0\">\n"
        f"{file_tags}"
        "\n\t</files>\n"
        f"{root_dir}\n"
        "</psproject>\n"
    )

    with open(path, "w", encoding="utf-8", newline="\n") as out:
        out.write(gp4_contents)


def main() -> None:
    parser = argparse.ArgumentParser()

    # Support both original single-dash long flags and GNU-style double-dash flags.
    parser.add_argument("-out", "--out", default="homebrew.gp4", help="output gp4 to write to")
    parser.add_argument("-content-id", "--content-id", default="", help="content ID of the package")
    parser.add_argument("-files", "--files", default="", help="list of files to pack into the package")
    parser.add_argument("-path", "--path", default="", help="path to files to pack into the package")

    args = parser.parse_args()

    if not args.content_id:
        error_exit("Content ID not specified, try -content-id=[content ID]\n")

    if not args.files and not args.path:
        error_exit(
            "Content files or path not specified, try -files=\"[files, separated by spaces]\" or -path=\"[path/to/files]\"\n"
        )

    try:
        create_gp4(args.out, args.content_id, args.files, args.path)
    except Exception as exc:
        error_exit("Error writing GP4: %s\n", str(exc))


if __name__ == "__main__":
    main()