34 changes: 34 additions & 0 deletions CHANGELOG.md
@@ -8,6 +8,40 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/)

## [Unreleased]

## [0.4.109] - 2026-03-19

### Hardened
- **Encryption helper robustness** — `DataEncryptor.encrypt()` and `decrypt()` handle `None` inputs gracefully. Warnings now alert operators when large payloads reach performance-sensitive paths. Debug logging no longer includes raw metadata.
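The hardening above can be sketched as a defensive wrapper. This is a minimal sketch, not the project's implementation: only the `DataEncryptor.encrypt()`/`decrypt()` names come from the entry, the byte-reversal stands in for the real cipher, and the 1 MB warning threshold is an assumption.

```python
import logging

logger = logging.getLogger(__name__)

# Assumed threshold for the large-payload operator warning
LARGE_PAYLOAD_BYTES = 1 * 1024 * 1024


class DataEncryptor:
    """Sketch of None-tolerant encrypt/decrypt helpers (cipher internals omitted)."""

    def encrypt(self, plaintext):
        if plaintext is None:
            return None  # tolerate missing input instead of raising
        data = plaintext.encode("utf-8") if isinstance(plaintext, str) else plaintext
        if len(data) > LARGE_PAYLOAD_BYTES:
            logger.warning("Encrypting large payload (%d bytes)", len(data))
        return data[::-1]  # placeholder for the real cipher

    def decrypt(self, ciphertext):
        if ciphertext is None:
            return None  # mirror encrypt(): None in, None out
        return ciphertext[::-1]  # placeholder inverse of the placeholder cipher
```

Returning `None` rather than raising keeps callers in store-and-forward paths from crashing on absent payloads.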

## [0.4.108] - 2026-03-19

### Hardened
- **Delete signal authorization** — Inbound P2P delete signals for channel messages now verify requester ownership (message author or channel admin). Revocation signals are prioritised in the store-and-forward queue to survive offline-peer overflow.

### Performance
- **Sidebar rendering efficiency** — DM contacts and peer list use DocumentFragment batching and render-key diffing to skip unnecessary DOM writes. Attention poll interval relaxed from 2.5s to 5s. GPU compositing hints added to animated sidebar elements.

## [0.4.107] - 2026-03-19

### Hardened
- **Trust boundary enforcement** — Delete-signal compliance and violation handlers verify signal ownership before adjusting trust scores. Manually penalised peers are locked out of automated trust recovery. Trust score operations now guard against non-existent trust records.
- **P2P input validation** — Inbound messages enforce payload size limits (512 KB total, 256 KB content, 512-byte IDs). Feed posts with private or custom visibility are rejected at the P2P layer. Author identity is verified against origin peer on inbound feed posts. Delete signal handlers verify requester ownership across all data types.
- **API authentication tightening** — All P2P status endpoints require authentication. Session-based API key generation validates CSRF tokens.
- **Feed visibility defaults** — `can_view()` defaults to untrusted, requiring callers to pass explicit trust context. `get_user_posts()` applies standard visibility filters. Feed statistics include custom-visibility posts the viewer has permission to see.
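The default-deny behaviour of `can_view()` described above can be sketched as follows; apart from the `can_view` name and the default-untrusted rule, the parameter names and visibility strings are assumptions.

```python
def can_view(post_visibility, viewer_is_author=False, viewer_is_trusted=False):
    """Default-deny visibility check: callers must pass explicit trust context."""
    if viewer_is_author:
        return True  # authors always see their own posts
    if post_visibility in ("public", "network"):
        return True
    if post_visibility == "trusted":
        return viewer_is_trusted  # defaults to False: untrusted unless stated
    return False  # private/custom: denied on this path
```

Because `viewer_is_trusted` defaults to `False`, a caller that forgets to pass trust context fails closed rather than open.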

### Performance
- **Channel rendering** — O(n) orphan-reply check via Set lookup (previously O(n²)). `displayMessages` returns its Promise for proper search-banner chaining.
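The actual change lives in the JavaScript channel renderer; the same Set-lookup idea is sketched here in Python (the project's primary language) with hypothetical field names.

```python
def find_orphan_replies(messages):
    """Flag replies whose parent is not in the current message batch.

    Building the id set once makes each membership test O(1), so the
    whole pass is O(n) instead of an O(n^2) nested scan per reply.
    """
    present_ids = {m["id"] for m in messages}
    return [
        m for m in messages
        if m.get("parent_message_id") and m["parent_message_id"] not in present_ids
    ]
```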

## [0.4.106] - 2026-03-18

### Changed
- **Privacy-first trust baseline** — Unknown peers now start at trust score 0 (pending review) instead of 100 (implicitly trusted). `is_peer_trusted()` requires an explicit trust row before a peer qualifies. The Trust UI now separates connected-but-unreviewed peers into a "Potential peers" queue rather than placing them into trust tiers by default.
- **Feed defaults to private** — Feed post creation defaults to `private` ("Only Me") across UI, API, and MCP. Agents and users that omit visibility no longer broadcast unintentionally. Helper text in the feed composer clarifies the default and explains trusted sharing.
- **Trusted feed visibility consistency** — All feed query paths (`get_user_feed`, `search_posts`, `count_unread_posts`, `get_feed_statistics`, `_get_smart_feed`, `get_posts_since`) now include `trusted` visibility so trusted posts are no longer inconsistently omitted.
- **Targeted feed propagation** — `broadcast_feed_post()` now computes target peers by visibility scope: public/network → all connected, trusted → only peers meeting the trust threshold, private/custom → no P2P broadcast. Catch-up sync includes trusted posts only for explicitly trusted peers.
- **Feed visibility narrowing revocation** — When a post is edited from a broader to a narrower visibility, peers that are no longer in scope receive a delete signal. Update call sites in UI, API, and MCP now pass `previous_visibility` so revocation logic can run.
- **Operator copy clarity** — Settings advise using a separate node for public relay. Channel privacy descriptions clarify that Guarded is moderated/mesh-visible (not private) and Private is for sensitive work.
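The privacy-first trust baseline above can be sketched as follows. Only `is_peer_trusted()`, `has_explicit_trust_score()` (which the API diff below also exposes), and the score-0 default come from this PR; the in-memory storage shape and the threshold of 50 are assumptions.

```python
DEFAULT_TRUST_SCORE = 0   # unknown peers start pending review, not trusted
TRUST_THRESHOLD = 50      # assumed cutoff for "trusted"


class TrustManager:
    """Sketch of the explicit-row trust baseline."""

    def __init__(self):
        self._scores = {}  # peer_id -> explicitly assigned trust score

    def set_trust_score(self, peer_id, score):
        self._scores[peer_id] = score

    def get_trust_score(self, peer_id):
        return self._scores.get(peer_id, DEFAULT_TRUST_SCORE)

    def has_explicit_trust_score(self, peer_id):
        return peer_id in self._scores

    def is_peer_trusted(self, peer_id):
        # An explicit trust row is required: the default can never qualify.
        return (
            self.has_explicit_trust_score(peer_id)
            and self._scores[peer_id] >= TRUST_THRESHOLD
        )
```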
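The targeted propagation rule for `broadcast_feed_post()` reduces to a scope-to-peers mapping. A sketch under assumed helper names (only the scope rules come from the entry):

```python
def select_broadcast_targets(visibility, connected_peers, is_peer_trusted):
    """Pick P2P targets for a feed post based on its visibility scope."""
    if visibility in ("public", "network"):
        return list(connected_peers)            # everyone currently connected
    if visibility == "trusted":
        return [p for p in connected_peers if is_peer_trusted(p)]
    return []                                   # private/custom: no P2P broadcast
```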
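The visibility-narrowing check that decides whether out-of-scope peers need a delete signal can be sketched as a rank comparison; the narrowest-to-broadest ordering used here is an assumption, not taken from the PR.

```python
# Assumed ordering: broader scopes rank higher, so a drop in rank is a narrowing.
_SCOPE_RANK = {"private": 0, "custom": 1, "trusted": 2, "network": 3, "public": 4}


def needs_revocation(previous_visibility, new_visibility):
    """True when an edit narrowed visibility, triggering delete signals."""
    if previous_visibility is None:
        return False  # no prior scope recorded; nothing to revoke
    return _SCOPE_RANK[new_visibility] < _SCOPE_RANK[previous_visibility]
```

This matches why the update call sites in the diff below now pass `previous_visibility`: without it the narrowing comparison cannot run.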

## [0.4.105] - 2026-03-18

### Fixed
4 changes: 3 additions & 1 deletion README.md
@@ -11,7 +11,7 @@
</p>

<p align="center">
<img src="https://img.shields.io/badge/version-0.4.105-blue" alt="Version 0.4.105">
<img src="https://img.shields.io/badge/version-0.4.109-blue" alt="Version 0.4.109">
<img src="https://img.shields.io/badge/python-3.10%2B-blue" alt="Python 3.10+">
<img src="https://img.shields.io/badge/license-Apache%202.0-green" alt="Apache 2.0 License">
<img src="https://img.shields.io/badge/encryption-ChaCha20--Poly1305-blueviolet" alt="ChaCha20-Poly1305">
@@ -31,6 +31,8 @@

> **Early-stage software.** Canopy is actively developed and evolving quickly. Use it for real workflows, but expect sharp edges and keep backups. See [LICENSE](LICENSE) for terms.

> **No tokens, no coins, no crypto.** Canopy is a free, open-source communication tool. It has no cryptocurrency, no blockchain, no token, and no paid tier. Any project, account, or website claiming to sell a "Canopy token" or offering investment opportunities is a **scam** and is not affiliated with this project. Report imposters to [GitHub Support](https://support.github.com).

---

## At A Glance
2 changes: 1 addition & 1 deletion canopy/__init__.py
@@ -11,7 +11,7 @@
Development: AI-assisted implementation (Claude, Codex, GitHub Copilot, Cursor IDE, Ollama)
"""

__version__ = "0.4.105"
__version__ = "0.4.109"
__protocol_version__ = 1
__author__ = "Canopy Contributors"
__license__ = "Apache-2.0"
13 changes: 10 additions & 3 deletions canopy/api/routes.py
@@ -1939,6 +1939,7 @@ def system_info():

# P2P Network endpoints
@api.route('/p2p/status', methods=['GET'])
@require_auth(allow_session=True)
def get_p2p_status():
"""Get P2P network status."""
*_, p2p_manager = _get_app_components_any(current_app)
@@ -2862,6 +2863,7 @@ def generate_api_key():
else:
session_user = session.get('user_id')
if session_user:
validate_csrf_request()
user_id = session_user
else:
return jsonify({
@@ -2904,6 +2906,9 @@ def generate_api_key():
return jsonify({'error': 'Failed to generate API key'}), 500

except Exception as e:
from werkzeug.exceptions import HTTPException
if isinstance(e, HTTPException):
raise
logger.error(f"Failed to generate API key: {e}")
return jsonify({'error': 'Internal server error'}), 500

@@ -3600,7 +3605,8 @@ def get_peer_trust(peer_id):
return jsonify({
'peer_id': peer_id,
'trust_score': score,
'is_trusted': is_trusted
'is_trusted': is_trusted,
'has_explicit_score': trust_manager.has_explicit_trust_score(peer_id),
})

except Exception as e:
@@ -3751,7 +3757,7 @@ def create_feed_post():

content = data.get('content')
post_type = data.get('post_type', 'text')
visibility = data.get('visibility', 'network')
visibility = data.get('visibility', 'private')
permissions = data.get('permissions', [])
metadata = data.get('metadata')
expires_at = data.get('expires_at')
@@ -3776,7 +3782,7 @@
try:
vis = PostVisibility(visibility)
except ValueError:
vis = PostVisibility.NETWORK
vis = PostVisibility.PRIVATE

# Auto-detect poll posts when content matches poll format
if pt == PostType.TEXT and parse_poll(content or ''):
@@ -6409,6 +6415,7 @@ def update_feed_post(post_id):
content=updated.content,
post_type=updated.post_type.value,
visibility=updated.visibility.value,
previous_visibility=post.visibility.value if getattr(post, 'visibility', None) else None,
timestamp=updated.created_at.isoformat() if hasattr(updated.created_at, 'isoformat') else str(updated.created_at),
metadata=updated.metadata,
expires_at=updated.expires_at.isoformat() if getattr(updated, 'expires_at', None) else None,
133 changes: 112 additions & 21 deletions canopy/core/app.py
@@ -4062,14 +4062,21 @@ def _on_catchup_request(channel_timestamps, from_peer,
# Feed posts newer than what the peer has
try:
since_feed = feed_latest or '1970-01-01 00:00:00'
visible_feed_modes = ['network', 'public']
try:
if trust_manager and trust_manager.is_peer_trusted(str(from_peer or '').strip()):
visible_feed_modes.append('trusted')
except Exception:
pass
with db_manager.get_connection() as conn:
placeholders = ",".join("?" for _ in visible_feed_modes)
rows = conn.execute(
"SELECT id, author_id, content, content_type, "
"visibility, metadata, created_at, expires_at "
"FROM feed_posts WHERE created_at > ? AND "
"(visibility = 'network' OR visibility = 'public') "
f"visibility IN ({placeholders}) "
"ORDER BY created_at ASC LIMIT 200",
(since_feed,)
(since_feed, *visible_feed_modes)
).fetchall()
if rows:
feed_posts = []
@@ -4899,6 +4906,36 @@ def _on_p2p_feed_post(post_id, author_id, content, post_type,
display_name, from_peer):
"""Store an incoming P2P feed post locally. Updates content/metadata when post already exists (edit broadcast)."""
try:
# --- Input validation ---

# Reject posts with private/custom visibility over P2P
vis_str = str(visibility or '').lower()
if vis_str in ('private', 'custom'):
logger.warning(f"Rejecting P2P feed post {post_id} with visibility={vis_str} from {from_peer}")
return

# ID length limits
for label, val in [('post_id', post_id), ('author_id', author_id)]:
if val and len(str(val).encode('utf-8')) > 512:
logger.warning(f"Rejecting P2P feed post: {label} too long from {from_peer}")
return

# Content size limit (256 KB)
if content and len(str(content).encode('utf-8')) > 256 * 1024:
logger.warning(f"Rejecting oversized P2P feed post content from {from_peer}")
return

# Metadata size limit (64 KB)
if metadata:
import json as _json
try:
meta_size = len(_json.dumps(metadata).encode('utf-8'))
if meta_size > 64 * 1024:
logger.warning(f"Rejecting oversized P2P feed post metadata from {from_peer}")
return
except Exception:
pass

# Ensure shadow user exists (reuse channel message logic)
feed_origin_peer = ''
try:
@@ -4914,6 +4951,18 @@
allow_origin_reassign=True,
)

# Author-ID spoofing prevention: verify the claimed author
# belongs to the sending peer
author_row = db_manager.get_user(author_id)
if author_row:
origin = (author_row.get('origin_peer') or '').strip()
if origin and origin != str(from_peer or '').strip():
logger.warning(
f"Rejecting P2P feed post {post_id}: author {author_id} "
f"origin_peer={origin} != from_peer={from_peer}"
)
return

# Normalise timestamp
normalised_ts = None
created_dt = None
@@ -6148,6 +6197,14 @@ def _on_p2p_direct_message(sender_id, recipient_id, content,
p2p_manager.on_direct_message = _on_p2p_direct_message

# --- Delete signal handler ---
def _requester_owns_user(owner_user_id: str, from_peer_id: str) -> bool:
"""Check that owner_user_id's origin_peer matches from_peer_id."""
urow = db_manager.get_user(owner_user_id)
if not urow:
return False
origin = (urow.get('origin_peer') or '').strip()
return bool(origin) and origin == str(from_peer_id or '').strip()

def _on_delete_signal(signal_id, data_type, data_id, reason,
requester_peer, is_ack, ack_status, from_peer):
"""Handle incoming DELETE_SIGNAL from a peer.
@@ -6159,6 +6216,12 @@
We update our local signal status and adjust trust score.
"""
try:
# ID length limits
for label, val in [('signal_id', signal_id), ('data_id', data_id)]:
if val and len(str(val).encode('utf-8')) > 512:
logger.warning(f"Rejecting delete signal: {label} too long from {from_peer}")
return

if is_ack:
# --- Acknowledgment from a peer ---
status = ack_status or 'acknowledged'
@@ -6192,26 +6255,48 @@

elif data_type == 'channel_message':
# Delete a specific channel message (explicit type).
# Remove FK references first: likes and parent_message_id.
# Security: only the message's origin peer or the channel's
# origin peer (admin) may request deletion.
try:
channel_id = None
with db_manager.get_connection() as conn:
row = conn.execute(
"SELECT channel_id FROM channel_messages WHERE id = ?",
"SELECT channel_id, user_id FROM channel_messages WHERE id = ?",
(data_id,),
).fetchone()
if row:
channel_id = row['channel_id'] if hasattr(row, 'keys') else row[0]
conn.execute("DELETE FROM likes WHERE message_id = ?", (data_id,))
conn.execute(
"UPDATE channel_messages SET parent_message_id = NULL WHERE parent_message_id = ?",
(data_id,),
)
cur = conn.execute(
"DELETE FROM channel_messages WHERE id = ?",
(data_id,))
conn.commit()
deleted = cur.rowcount > 0
msg_user_id = row['user_id'] if hasattr(row, 'keys') else row[1]
requester = str(requester_peer or from_peer or '').strip()
msg_authorized = _requester_owns_user(msg_user_id, requester)
ch_row = conn.execute(
"SELECT origin_peer FROM channels WHERE id = ?",
(channel_id,),
).fetchone()
ch_origin = ''
if ch_row:
ch_origin = (ch_row['origin_peer'] if hasattr(ch_row, 'keys') else ch_row[0]) or ''
ch_admin = bool(ch_origin) and requester == ch_origin
if not msg_authorized and not ch_admin:
logger.warning(
"SECURITY: Rejected channel_message delete for %s "
"(requester=%s, msg_user=%s, ch_origin=%s)",
data_id, requester, msg_user_id, ch_origin,
)
deleted = False
else:
conn.execute("DELETE FROM likes WHERE message_id = ?", (data_id,))
conn.execute(
"UPDATE channel_messages SET parent_message_id = NULL WHERE parent_message_id = ?",
(data_id,),
)
cur = conn.execute(
"DELETE FROM channel_messages WHERE id = ?",
(data_id,))
conn.commit()
deleted = cur.rowcount > 0
else:
deleted = True # Already gone, idempotent
if deleted and channel_id:
try:
channel_manager._emit_channel_user_event(
@@ -6241,15 +6326,21 @@ def _on_delete_signal(signal_id, data_type, data_id, reason,
logger.error(f"Failed to delete file {data_id}: {del_err}")

elif data_type in ('feed_post', 'post'):
# Delete a feed post
# Delete a feed post — verify requester owns the author
try:
deleted_post = feed_manager.get_post(data_id) if feed_manager else None
with db_manager.get_connection() as conn:
cur = conn.execute(
"DELETE FROM feed_posts WHERE id = ?",
(data_id,))
conn.commit()
deleted = cur.rowcount > 0
if deleted_post and not _requester_owns_user(deleted_post.author_id, from_peer):
logger.warning(
f"SECURITY: Rejected feed post delete for {data_id}: "
f"author={deleted_post.author_id} not owned by {from_peer}"
)
elif deleted_post or not feed_manager:
with db_manager.get_connection() as conn:
cur = conn.execute(
"DELETE FROM feed_posts WHERE id = ?",
(data_id,))
conn.commit()
deleted = cur.rowcount > 0
if deleted and feed_manager and deleted_post:
try:
feed_manager._emit_post_event(