A hiking trail explorer for Israel with an interactive map and filtering system.
Hiker is a web-based application that aggregates hiking trails from multiple sources and presents them on an interactive map. Users can filter trails by difficulty, length, and tags to find their perfect hike.
🖼️ Frontend - Vibe-coded vanilla JavaScript + HTML/CSS
- Interactive Leaflet map
- Hebrew-language interface
- Real-time filtering by distance and tags
- Responsive design with collapsible sidebar
- Trail details popup
🐍 Backend - Hand-written Python scrapers
- Parallel scraping from multiple hiking data sources (Parks.org.il, Tiuli, KKL)
- Data aggregation and normalization
- Exports to JSON for use by the frontend
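The scrape-in-parallel-then-export flow can be sketched as follows. This is a minimal illustration, not the project's actual code: the scraper functions and record fields here are placeholders.

```python
# Sketch of the parallel scrape-and-merge flow, assuming each source module
# exposes a function returning a list of hike records (names illustrative).
import json
from concurrent.futures import ThreadPoolExecutor

def scrape_parks():  # placeholder for a real per-source scraper
    return [{"name": "Ein Gedi", "length_km": 4.5, "tags": ["water"]}]

def scrape_tiuli():  # placeholder for a real per-source scraper
    return [{"name": "Mount Arbel", "length_km": 6.0, "tags": ["scenic"]}]

def scrape_all(scrapers):
    """Run all scrapers in parallel threads and merge their results."""
    with ThreadPoolExecutor(max_workers=len(scrapers)) as pool:
        results = pool.map(lambda fn: fn(), scrapers)
    return [hike for source in results for hike in source]

if __name__ == "__main__":
    hikes = scrape_all([scrape_parks, scrape_tiuli])
    print(json.dumps(hikes, ensure_ascii=False))  # what gets written to hikes.json
```

Threads fit well here because scraping is I/O-bound: each worker spends most of its time waiting on network responses.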
- 🗺️ Interactive Map - Browse trails on a Leaflet map with markers
- 🔍 Smart Filtering - Filter by trail length and tags (e.g., dog-friendly, water, scenic)
- 📊 Sorting - Sort trails by name, length, difficulty, or geography
- 📱 Mobile Responsive - Works on desktop and mobile devices
- 🌐 No Backend Required - Static site deployed via GitHub Pages (data served as JSON)
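The filtering idea is simple: a trail passes if it is under the length limit and carries every requested tag. The real filtering runs in the frontend JavaScript; this Python sketch (with made-up sample records) just shows the logic.

```python
# Illustrative filter over hikes.json-style records: keep trails no longer
# than max_length_km that carry all of the required tags.
def filter_hikes(hikes, max_length_km=None, required_tags=()):
    kept = []
    for hike in hikes:
        if max_length_km is not None and hike["length_km"] > max_length_km:
            continue
        if not set(required_tags) <= set(hike["tags"]):  # subset check
            continue
        kept.append(hike)
    return kept

sample = [
    {"name": "Nahal David", "length_km": 3.2, "tags": ["water", "dog-friendly"]},
    {"name": "Israel Trail segment", "length_km": 18.0, "tags": ["scenic"]},
]
short_water = filter_hikes(sample, max_length_km=10, required_tags=["water"])
```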
cd backend
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
python main.py
This generates hikes.json with the latest trail data.
The frontend is a static site. Serve it locally:
# Using Python
cd docs
python -m http.server 8000
# Or use any other static server
# Visit http://localhost:8000
- Parks - Israeli national parks and nature reserves
- Tiuli - Community hiking database
- KKL - Jewish National Fund trails
Frontend Code is intentionally informal and flexible ("vibe-coded"):
- Prioritizes functionality and rapid iteration
- Not heavily structured or commented
Backend Code is hand-written without external frameworks:
- Custom scrapers tailored to each source's HTML structure
- No database layer, so the site can be hosted static
- Direct BeautifulSoup parsing
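A per-source scraper boils down to selecting elements from the site's HTML and mapping them to hike records. The sketch below runs on a fixed snippet; the selectors and field names are invented for illustration, since each real scraper is tailored to its source's markup.

```python
# Minimal BeautifulSoup parsing sketch over a hard-coded HTML snippet.
# Real scrapers fetch each site's pages and use source-specific selectors.
from bs4 import BeautifulSoup

SAMPLE_HTML = """
<div class="trail"><h3>Nahal David</h3><span class="km">3.2</span></div>
<div class="trail"><h3>Masada Snake Path</h3><span class="km">2.7</span></div>
"""

def parse_trails(html):
    soup = BeautifulSoup(html, "html.parser")
    trails = []
    for card in soup.select("div.trail"):  # one card per trail
        trails.append({
            "name": card.h3.get_text(strip=True),
            "length_km": float(card.select_one("span.km").get_text(strip=True)),
        })
    return trails
```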
Both approaches work well for this project's scale and serve as reference implementations.
To add a new data source:
- Create a new scraper file in `backend/` (e.g., `new_source.py`)
- Implement a `get_all_hikes()` function returning `list[HikeInfo]`
- Import it and add it to the parallel threads in `main.py`
- Run `main.py` to generate an updated `hikes.json`
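A new source module following those steps might look like this. `HikeInfo`'s actual fields live in the backend; the dataclass here is an illustrative stand-in.

```python
# Sketch of the scraper contract for a new source. The HikeInfo fields
# shown are assumptions, not the project's real model.
from dataclasses import dataclass, field

@dataclass
class HikeInfo:  # illustrative stand-in for the backend's real HikeInfo
    name: str
    length_km: float
    tags: list[str] = field(default_factory=list)

def get_all_hikes() -> list[HikeInfo]:
    """Entry point each source module exposes to main.py's thread pool."""
    # A real implementation would fetch and parse the source's pages here.
    return [HikeInfo(name="Example Trail", length_km=5.0, tags=["scenic"])]
```

Keeping every source behind the same `get_all_hikes()` signature is what lets `main.py` run them all in parallel and merge the results without per-source special cases.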
The frontend is hosted on GitHub Pages via the docs/ folder. Push to trigger automatic deployment.
Happy Hiking ⛰️🏕️🥾