
Conversation

@pattonw (Contributor) commented Jul 11, 2025

Add support for zarr 3

Waiting on iohub>3.0 for OME-Zarr support with zarr 3:
https://github.com/czbiohub-sf/iohub/releases

will and others added 30 commits February 11, 2026 00:16
…nly case

The elif branch handling an upper-bound-only ROI dimension was checking
roi.begin[dim] (always False at that point) instead of roi.end[dim],
causing the upper-bound constraint to be silently dropped.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
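
As a minimal sketch of the corrected branch logic (the helper name and column scheme below are illustrative, not the library's actual API), the per-dimension ROI filter now tests `roi.end[dim]` in the upper-bound-only branch:

```python
def roi_condition(roi, position_columns):
    # Build a SQL condition per spatial dimension; unbounded dimensions
    # (None begin/end) contribute no constraint. Illustrative sketch only.
    conditions = []
    for dim, col in enumerate(position_columns):
        begin, end = roi.begin[dim], roi.end[dim]
        if begin is not None and end is not None:
            conditions.append(f"{col} >= {begin} AND {col} < {end}")
        elif begin is not None:
            conditions.append(f"{col} >= {begin}")
        elif end is not None:  # the buggy version re-checked roi.begin[dim] here
            conditions.append(f"{col} < {end}")
    return " AND ".join(conditions)
```
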
Roi uses half-open intervals where end is exclusive. SQL BETWEEN is
inclusive on both ends, so nodes exactly at roi.end were incorrectly
included. Replace with >= begin AND < end.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
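
For illustration (the column name and bounds are placeholders, not the actual query builder), the change amounts to:

```python
col, begin, end = "z", 0, 100  # placeholder dimension and bounds

# Before: BETWEEN is inclusive on both ends, so a node at exactly `end` matched.
old_condition = f"{col} BETWEEN {begin} AND {end}"

# After: half-open interval [begin, end), matching Roi semantics.
new_condition = f"{col} >= {begin} AND {col} < {end}"
```
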
The __attr_query() call already generates WHERE conditions for all
attr_filter entries. The subsequent for-loop over attr_filter appended
the same conditions again, producing redundant SQL like
"WHERE foo=1 AND foo=1".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
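
A rough sketch of the deduplicated query construction (names are hypothetical; `attr_clause` stands in for the output of `__attr_query()`):

```python
def build_node_query(roi_clause: str, attr_clause: str | None) -> str:
    # attr_clause already contains the full "foo=1 AND bar=2" filter, so it is
    # appended exactly once; the removed per-attribute loop re-appended the
    # same conditions, yielding SQL like "WHERE foo=1 AND foo=1".
    query = f"SELECT * FROM nodes WHERE {roi_clause}"
    if attr_clause:
        query += f" AND {attr_clause}"
    return query
```
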
Same issue as read_nodes — __attr_query() already generates the full
filter clause, but the subsequent for-loop re-appended identical
conditions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
write_nodes was hardcoding fail_if_exists=True in the _insert_query
call, silently ignoring the caller's parameter. Duplicate node inserts
with fail_if_exists=False would crash instead of being ignored.

Add test_graph_duplicate_insert_behavior to verify both flags work
correctly for nodes and edges.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
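
A sketch of the corrected behaviour in SQLite terms (the real insert-query builder is not reproduced here; the dialect detail is illustrative, and PostgreSQL would use `ON CONFLICT DO NOTHING` rather than `INSERT OR IGNORE`):

```python
def insert_query(table: str, columns: list[str], fail_if_exists: bool) -> str:
    # The caller's flag is now honoured; previously fail_if_exists=True was
    # hardcoded, so duplicate inserts always raised instead of being skipped.
    verb = "INSERT" if fail_if_exists else "INSERT OR IGNORE"
    placeholders = ", ".join("?" for _ in columns)
    return f"{verb} INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
```
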
Also adds a JOIN query to handle cases where edges are fetched by ROI rather than by a list of nodes. This is more efficient because it needs only a single round-trip query.
Previous behavior was to silently ignore `roi`.
Reading by ROI can be significantly more efficient than reading by node list, since the node list can be huge and would need to be serialized into a string.
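
Schematically (table and column names such as `edges.u` and the ROI clause are assumptions for illustration), the single-round-trip query looks like:

```python
# roi_clause would be the per-dimension condition on the node position columns.
roi_clause = "nodes.z >= 0 AND nodes.z < 100"  # placeholder

edges_in_roi = f"""
    SELECT edges.*
    FROM edges
    JOIN nodes ON edges.u = nodes.id
    WHERE {roi_clause}
"""
```

No list of node IDs has to be built, serialized, and shipped to the server; the filter stays entirely on the database side.
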
This allows testing against databases other than just a locally running one.
There are now bulk versions of `write_nodes`, `write_edges`, and `write_graph`. These are faster but do not support some features such as `fail_if_exists`, and thus require more care from the user to guarantee that the data being passed is valid.
There are also helper context managers that drop and rebuild indexes and temporarily disable synchronous commits, which can be used to further speed up writes.

Tests have been expanded to make sure that the new API matches the features of the base implementation and that it is actually faster.
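
As a hypothetical sketch of the index-dropping idea (the new API's actual helper names are not reproduced here), such a context manager could look like:

```python
from contextlib import contextmanager

@contextmanager
def without_index(conn, index: str, create_sql: str):
    # Drop an index before a bulk load and rebuild it afterwards. `conn` is
    # any DB-API connection (e.g. sqlite3 or psycopg2). Sketch only; the
    # library's own context managers may differ in name and signature.
    cur = conn.cursor()
    cur.execute(f"DROP INDEX IF EXISTS {index}")
    conn.commit()
    try:
        yield
    finally:
        cur.execute(create_sql)  # e.g. "CREATE INDEX idx_nodes_pos ON nodes (z, y, x)"
        conn.commit()
```
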
…nections.

Fixes a very frustrating permanent hang that can occur due to unclosed PostgreSQL connections.
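
A minimal sketch of the connection hygiene this implies (the DSN and query are placeholders; the PR's actual change lives in the database classes, not in a standalone function like this):

```python
from contextlib import closing

import psycopg2

def count_nodes(dsn: str) -> int:
    # psycopg2's `with conn:` only manages the transaction; it does not close
    # the connection. `closing()` guarantees the socket is released even on
    # error, so no idle connection can block server-side operations forever.
    with closing(psycopg2.connect(dsn)) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM nodes")
            return cur.fetchone()[0]
```
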
… large numbers like fragment ids much easier
- `sql_graph_database.py`: Refactored `__init__` to declare instance attributes with non-optional type annotations (`str`, `int`, `bool`, `Roi`, `list[str]`). Refactored `__load_metadata` to take optional overrides as params and always assign from stored metadata. Removed all `# type: ignore` comments.
- `sqlite_graph_database.py`: Removed `# type: ignore` comments, fixed the `position_columns = self.position_attribute` bug (should be `[self.position_attribute]`), fixed a parameter name mismatch (`query_attrs` -> `attrs`), replaced `.size` access with `len()` on already-computed array columns.
- `pgsql_graph_database.py`: Fixed `dict[str, type]` -> `dict[str, AttributeType]` in the `__init__` signature, removed stale `# type: ignore` comments.