A persistent memory and self-improvement system that transforms Home Assistant's conversation agent from a stateless chatbot into an intelligent assistant that remembers, learns, monitors, and maintains your smart home over time.
Built and battle-tested on a Raspberry Pi 4 (2GB RAM) running HAOS. No external databases, no cloud storage, no paid services beyond what you already use.
Home Assistant's conversation agents (Gemini, OpenAI, etc.) have no memory between interactions. Every conversation starts from zero. PERMEAR fixes that with a file-based memory architecture that gives your agent a persistent soul, user profiles, learned insights, and the ability to create automations and monitor system health — all through local JSON files, Python scripts, and HA automations.
The agent evolves from household assistant to system caretaker — it monitors HA health, detects errors (including its own), checks for updates, autodiscovers entities, and can create native HA automations with user approval.
MEMORY (persistent JSON files)
├── guidelines.json ← IMMUTABLE constitution (chmod 444)
├── soul.json ← Agent personality (edited weekly by agent)
├── users.json ← Household profiles (edited weekly + quick-learn)
├── insights.json ← Detected patterns (edited weekly)
├── monitored_entities.json ← Single source of truth for entities
│ monitor:true → pre-briefing reads state
│ events:[] → buffer logs state changes
└── daily/
└── monday..sunday.json ← 7-day rotating event logs
SCRIPTS (all import from permear_config.py)
├── permear_config.py ← Centralized paths and constants
├── append_daily.py ← Log events/interactions/memories
├── build_briefing.py ← Daily briefing prompt (21h)
├── build_prebriefing.py ← Proactive evaluation (30min) + SELF_ERRORS
├── build_weekly_prompt.py ← Weekly compilation prompt (Sunday)
├── update_daily_memory.py ← Save extracted memories
├── weekly_compile.py ← Apply LLM edits to perennials
├── apply_quick_learning.py ← Instant restriction from rejections
├── discover_entities.py ← Autodiscover exposed entities
├── generate_buffer_events.py ← Regenerate triggers from JSON
├── ha_log_monitor.py ← Parse logs: SELF_ERRORS vs ERRORS
├── ha_updates_check.py ← Check HA/addon updates
├── manage_agent_automations.py ← Create/remove HA automations
├── sensor_current_day.py ← HA sensor: current day memory
└── sensor_perennial.py ← HA sensor: perennial files
CYCLES
├── Every 30 min (08-20h) ── Pre-briefing: health + house evaluation
├── Daily 21h ────────────── Briefing: day summary + updates + memories
├── Daily 06:00 ──────────── Entity autodiscovery
├── Sunday 00:05 ─────────── Weekly compile: self-improvement
└── On demand ────────────── Telegram chat + voice commands
The pre-briefing starts noisy and becomes precise over time. Reply "that's irrelevant" and the agent learns immediately.
Daily files named by weekday. Next Monday overwrites this Monday. No cleanup needed.
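The rotation can be sketched in a few lines; the directory and naming follow the tree above, the helper itself is illustrative:

```python
from datetime import date

# Weekday-based rotation: writes always target the current weekday's
# file, so next Monday's log naturally overwrites this Monday's.
def daily_file(d: date) -> str:
    names = ["monday", "tuesday", "wednesday", "thursday",
             "friday", "saturday", "sunday"]
    return f"/config/memory/daily/{names[d.weekday()]}.json"

print(daily_file(date(2024, 1, 1)))  # a Monday → /config/memory/daily/monday.json
```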
monitored_entities.json serves two roles: monitor: true for pre-briefing state reading, events for buffer trigger generation. Edit one file, run generate_buffer_events.py, both systems update.
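A minimal sketch of the dual-role read, assuming a plausible `monitored_entities.json` shape (the entity IDs and exact schema here are invented for illustration):

```python
# Assumed structure: entity_id → {"monitor": bool, "events": [...]}.
entities = {
    "light.kitchen": {"monitor": True, "events": ["on", "off"]},
    "sensor.front_door": {"monitor": False, "events": ["open"]},
}

# Pre-briefing side: entities whose current state should be read.
to_read = [e for e, cfg in entities.items() if cfg.get("monitor")]

# Buffer side: entities that need generated state-change triggers.
to_trigger = [e for e, cfg in entities.items() if cfg.get("events")]

print(to_read)     # ['light.kitchen']
print(to_trigger)  # ['light.kitchen', 'sensor.front_door']
```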
The log monitor classifies errors from PERMEAR components (telegram_bot, conversation, automation, shell_command) as SELF_ERRORS. When detected, the pre-briefing prompt instructs the agent to report what went wrong, what its last action was, and suggest a fix. External HA errors remain as regular ERRORS.
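The classification rule reduces to a set-membership check. The component list mirrors the text and `SELF_COMPONENTS` in `permear_config.py`; the function itself is a simplified sketch of `ha_log_monitor.py`:

```python
# Components the agent acts through directly; errors from these are
# treated as self-caused (configurable via SELF_COMPONENTS).
SELF_COMPONENTS = {"telegram_bot", "conversation", "automation", "shell_command"}

def classify(component: str) -> str:
    # Errors from PERMEAR's own components vs. the rest of HA.
    return "SELF_ERRORS" if component in SELF_COMPONENTS else "ERRORS"

print(classify("telegram_bot"))  # SELF_ERRORS
print(classify("zha"))           # ERRORS
```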
The agent proposes automations via Telegram; once you approve, they are activated via `automation.reload`.
guidelines.json (chmod 444) defines the agent's operating boundaries. It cannot change them.
- Home Assistant 2023.7+
- A conversation agent (Gemini 2.5 Flash recommended — free tier sufficient)
- Telegram bot in HA (polling mode)
- Python 3 + PyYAML (included in HAOS)
- Long-lived HA access token
- `max_tokens` set to 8192+ in your LLM integration
```bash
mkdir -p /config/memory/daily /config/scripts /config/logs
touch /config/automations/agent_automations.yaml
```

`configuration.yaml` must load automations directory-based:

```yaml
automation: !include_dir_merge_list automations/
```

HA sidebar → username → Long-Lived Access Tokens → Create → "PERMEAR":

```bash
echo "YOUR_TOKEN" > /config/.permear_token
chmod 600 /config/.permear_token
```

Google Generative AI: Settings → Configure → uncheck "Recommended model settings" → Maximum tokens: 8192
scripts/*.py → /config/scripts/
memory/*.json → /config/memory/
automations/*.yaml → /config/automations/
Customize before locking:

- `permear_config.py` — paths, `DAYS` for language, `SELF_COMPONENTS`. See the Customization Guide.
- `soul.json` — agent personality.
- `users.json` — household profiles.
- `guidelines.json` — edit before locking.

Then lock the constitution:

```bash
chmod 444 /config/memory/guidelines.json
```
Copy the contents of `configuration_additions.yaml` into your `configuration.yaml`.
Replace the placeholders: YOUR_CHAT_ID, YOUR_AGENT_ID (verify in Developer Tools → Services → conversation.process), and person.YOUR_PERSON.
SYSTEM MONITORING: You monitor HA health. Critical errors: notify immediately.
SELF_ERRORS are from your own actions — always report what you think went wrong.
Updates: mention in daily briefing only. New devices: ask user to name them.
AUTOMATIONS: Create with manage_agent_auto_create, remove with manage_agent_auto_remove,
list with manage_agent_auto_list. ALWAYS ask confirmation before creating.
ENTITY MONITORING: "monitor [entity]" → add_monitored_entity.
"stop monitoring [entity]" → remove_monitored_entity.
- Never use sentence triggers (`platform: conversation`).
- Verify your agent_id — often `conversation.google_ai_conversation`, NOT `google_generative_ai`.
- `telegram_bot.send_message` takes `chat_id`, not `target`.
- HA triggers are static. Define events in JSON, then run `generate_buffer_events.py`.
- `max_tokens` must be 8192+ for weekly compilation.
- `ha_updates_check.py` only works inside the HAOS container (`SUPERVISOR_TOKEN`).
- Use `| truncate()`, not `[:255]`, in HA templates.
- All response_variable stdout must use `| default('') | trim | default('fallback')` to prevent empty-message errors.
- Gemini ignores format instructions with long conversation history. Reset `conversation_id` or inject them in the message.
- `discover_entities.py` filters by `should_expose` in the entity registry.
- SELF_ERRORS flag errors from components the agent uses directly. Customize in `permear_config.py`.
- SELF_ERRORS awareness: `ha_log_monitor.py` now classifies errors from PERMEAR components (telegram_bot, conversation, automation, shell_command) as SELF_ERRORS — separate from external HA ERRORS. The pre-briefing prompt instructs the agent to report what it thinks went wrong, what its last action was, and suggest a fix.
- `SELF_COMPONENTS` in `permear_config.py`: configurable list of components whose errors are flagged as self-caused.
- `guidelines.json` updated: monitoring guidelines now include a SELF_ERRORS handling rule.
- Centralized configuration: `permear_config.py` holds all paths and constants, and all scripts import from it. Users with non-standard directories edit one file.
- `monitored_entities.json` as single source of truth: `monitor` for pre-briefing, `events` for buffer triggers.
- `generate_buffer_events.py`: regenerates YAML between markers.
- `discover_entities.py` preserves the `monitor` and `events` fields.
- Empty speech fix: `| default('') | trim` with a fallback message.
- `generate_buffer_events` shell_command added.
- Agent ID fix, `should_expose` filter, `apply_users` any-field diff, truncation detection, `| truncate()` fix, prompt compaction.
- Agent as system caretaker. HA health monitoring, update checking, entity autodiscovery, native automation creation. Allowed actions removed.
- Telegram context injection, briefing memory timing, quick-learn localization.
- Initial release.
MIT — Use it, fork it, improve it.
Architecture designed in collaboration with Claude (Anthropic).