A Hugo-based personal blog and book tracking website.
The scripts/update-books.sh script uses the cover CLI to fetch book data from Hardcover.
```sh
# Install cover CLI (requires Go)
git clone https://github.com/jackreid/cover.git
cd cover
go build -o cover
# Move binary to PATH or use ./cover directly
```

- Create a Hardcover account at hardcover.app
- Get your API key from account settings
- Set the environment variable:

```sh
export HARDCOVER_API_KEY="your-api-key-here"
```
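Before running the update script, a quick sanity check can catch missing prerequisites. This snippet is illustrative only and not part of the repository's scripts:

```sh
# Illustrative check (not part of the repo): confirm the cover binary is
# reachable and the API key is set before running update-books.sh
if ! command -v cover >/dev/null 2>&1; then
  echo "cover not found on PATH" >&2
fi
if [ -z "${HARDCOVER_API_KEY:-}" ]; then
  echo "HARDCOVER_API_KEY is not set" >&2
fi
```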
```sh
# Update all book data
./scripts/update-books.sh
```

```sh
# Start development server
hugo server

# Build for production
hugo build
```

### new-note.sh

Creates a new note post. Prompts for note text and generates a slug automatically.
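The slug-generation step can be sketched in plain shell. The exact rules the script applies are not shown here, so this is an assumed lowercase-and-hyphenate scheme:

```sh
# Hypothetical slug derivation (assumed scheme, not read from the script):
# lowercase, collapse runs of non-alphanumerics to hyphens, trim the ends
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

slugify "Quick Note: Hello, World!"
# -> quick-note-hello-world
```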
Dependencies: None

Usage:

```sh
./scripts/new-note.sh
```

### new-photo.sh

Creates a new photo post from an image file. Extracts creation date from EXIF data and prompts for metadata (slug, title, location, tags, alt text). Automatically resizes and optimizes the image.
Dependencies:

- `exiftool` (for EXIF data extraction)
- `imagemagick` (for image processing via `mogrify`)

Usage:

```sh
./scripts/new-photo.sh path/to/photo.jpg
```

### new-post.sh

Creates a new blog post. Prompts for slug and title, then opens the file in vim for editing.
Dependencies: vim (or your default editor)

Usage:

```sh
./scripts/new-post.sh
```

### new-media.sh

Interactive script to create medialog posts for books or movies. Uses fzf to browse and select from your read books and watched movies. Shows checkmarks for items that already have posts. Automatically handles duplicate slugs by appending numbers.
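The duplicate-slug handling can be sketched as follows. The function name, directory argument, and `.md` extension are assumptions for illustration, not taken from the script:

```sh
# Hypothetical sketch of duplicate-slug handling: if <dir>/<slug>.md
# exists, try <slug>-2, <slug>-3, ... until a free name is found
unique_slug() {
  dir="$1"; slug="$2"
  candidate="$slug"; n=2
  while [ -e "$dir/$candidate.md" ]; do
    candidate="$slug-$n"
    n=$((n + 1))
  done
  printf '%s\n' "$candidate"
}
```

For example, `unique_slug content/medialog dune` would print `dune-2` when `dune.md` already exists in that directory.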
Dependencies:

- `jq` (for JSON parsing)
- `fzf` (for interactive selection)
- `vim` (for editing the created post)

Usage:

```sh
./scripts/new-media.sh
```

### update-books.sh

Updates book data from Hardcover using the cover CLI. Fetches books in three categories: to-read, reading, and read.
Dependencies:

- `cover` CLI (see Dependencies section above)
- `HARDCOVER_API_KEY` environment variable

Usage:

```sh
./scripts/update-books.sh
```

### update-films.sh

Updates film data from a Letterboxd export. Supports two modes:
- Automatic mode: Downloads the export automatically using cookies
- Manual mode: Processes a pre-downloaded and extracted export directory
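The CSV-processing step can be sketched with sqlite3. The `watched.csv` file name and its columns are assumptions about the Letterboxd export layout, not read from the script:

```sh
# Hypothetical sketch: load a CSV from the extracted export into an
# in-memory table and query it (file name and columns are assumptions)
export_dir="/path/to/extracted/export"
if [ -f "$export_dir/watched.csv" ]; then
  sqlite3 :memory: \
    ".mode csv" \
    ".import $export_dir/watched.csv watched" \
    "SELECT Name, Year FROM watched ORDER BY Year;"
fi
```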
Dependencies:

- `sqlite3` (for processing CSV data)
- `unzip` (for automatic mode)
- `curl` (for automatic mode)
Setup for automatic mode:

- Export your Letterboxd cookies using a browser extension (e.g., "cookies.txt" or "Get cookies.txt LOCALLY")
- Save cookies to `./creds/letterboxd-cookies.txt`, or set the `LETTERBOXD_COOKIE_FILE` environment variable
- Or set the `LETTERBOXD_COOKIES` environment variable with the cookie string
Usage:

```sh
# Automatic mode (requires cookies)
./scripts/update-films.sh

# Manual mode (if you've already downloaded and extracted the export)
./scripts/update-films.sh /path/to/extracted/export/
```

### update-random-photos.sh

Precomputes a list of eligible photos for random selection in the footer. This reduces build-time computation by filtering photos before Hugo builds, which is especially important for builds running on resource-constrained servers.

The script scans all photos in `content/photo/`, filters out photos with tags listed in `data/content_config.json`'s `exclude_tags`, and writes eligible photo paths to `data/random_photos.json`.
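The filtering step can be sketched with jq. The `photos.json` intermediate below (an array of `{"path", "tags"}` objects) is an assumption for illustration; the real script's intermediate format is not shown here:

```sh
# Hypothetical sketch: keep only photos whose tags share nothing with
# exclude_tags, then emit their paths; the photos.json intermediate
# (shape: [{"path": "...", "tags": ["..."]}]) is an assumption
if [ -f photos.json ]; then
  jq --slurpfile cfg data/content_config.json \
     '[ .[] | select((.tags - $cfg[0].exclude_tags) == .tags) | .path ]' \
     photos.json > data/random_photos.json
fi
```

The `select` keeps a photo only when subtracting `exclude_tags` from its tags changes nothing, i.e. when there is no overlap.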
Dependencies:

- `jq` (for JSON parsing and generation)

Usage:

```sh
# Update the random photo pool (run before committing changes)
./scripts/update-random-photos.sh
```

When to run:

- After adding new photos
- After modifying photo tags
- After updating `exclude_tags` in `data/content_config.json`
- Before pushing to production (to ensure fast builds)
The footer template uses this precomputed list to select a random photo, avoiding the need to scan and filter all photos during each Hugo build.
The site uses custom Hugo archetypes for different content types:
- highlight - Article highlights and commentary
- note - Quick personal notes and thoughts
- photo - Photo posts with metadata
- post - Standard blog posts
- Theme: `reallylol` (custom theme)
- Language: English (GB)
- Syntax highlighting: Dracula theme
- Pagination: 10 items per page
- Build future posts: enabled
- Update `/uses`