diff --git a/README.md b/README.md index 67d9d71..c94231c 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,9 @@ **fluidize-python** is a library for building modular, reproducible scientific computing pipelines. It provides a unified interface to a wide range of physical simulation tools, eliminating the need to navigate the inconsistent, incomplete instructions that often vary from tool to tool. -This library marks our first step toward AI-orchestrated scientific computing. By standardizing tools and practices within our framework, AI agents can automatically build, configure, and execute computational pipelines across domains and simulation platforms. Our goal is to improve today’s simulation tools so AI can assist researchers and scientists in accelerating the pace of innovation and scientific discovery. +This library marks our first step toward AI-orchestrated scientific computing. By standardizing tools and practices within our framework, AI agents can automatically build, configure, and execute computational pipelines across domains and simulation platforms. + +Our goal is to improve today’s simulation tools so AI can assist researchers and scientists in accelerating the pace of innovation and scientific discovery. ## Quick Start @@ -54,6 +56,7 @@ Students and researchers face significant barriers when working with different s - **Reproducibility issues** – Sharing and reproducing experiments is frequently cumbersome and error-prone. - **Scaling friction** – Moving from a local prototype to a cloud environment or dedicated compute cluster can be slow and difficult. + ## The Solution Fluidize provides a standardized wrapper that turns complex scientific software into modular components. This makes it possible to: @@ -64,10 +67,16 @@ Fluidize provides a standardized wrapper that turns complex scientific software All of this works with **minimal or no changes** to the existing codebase, allowing our framework to scale effortlessly to any repository. 
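+The "predictable input/output paths" contract can be sketched in plain Python. This is a conceptual illustration only, not the Fluidize API: each hypothetical node reads from a fixed input directory and writes to a fixed output directory, so two tools chain together without modifying either one.

```python
import json
import tempfile
from pathlib import Path

# Conceptual sketch only (plain Python, not the Fluidize API): every node
# reads from a predictable input directory and writes to a predictable
# output directory, so tools chain without changes to their source code.

def generate(inputs: Path, outputs: Path) -> None:
    # First "node": produces data without reading any input.
    (outputs / "data.json").write_text(json.dumps({"values": [1, 2, 3]}))

def double(inputs: Path, outputs: Path) -> None:
    # Second "node": consumes the previous node's output directory.
    data = json.loads((inputs / "data.json").read_text())
    (outputs / "data.json").write_text(
        json.dumps({"values": [v * 2 for v in data["values"]]})
    )

def run_node(step, input_dir: Path, output_dir: Path) -> Path:
    # The only contract between nodes is the input/output directory pair.
    output_dir.mkdir(parents=True, exist_ok=True)
    step(input_dir, output_dir)
    return output_dir

root = Path(tempfile.mkdtemp())
out1 = run_node(generate, root / "in", root / "node1")
out2 = run_node(double, out1, root / "node2")
result = json.loads((out2 / "data.json").read_text())
```

Because each step only sees directories, the same wiring works whether a step is a Python function, a containerized solver, or a visualization tool.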
+ ## Architecture +At Fluidize, we believe strong organization leads to better reproducibility and scalability. + +We treat each simulation pipeline as an individual project. Within projects, each pipeline is treated as a DAG (directed acyclic graph), where nodes represent individual pieces of scientific software (e.g., inputs, solvers, and visualization tools) and edges represent data flow between nodes. + + ### Nodes -The foundational building blocks of Fluidize. Each node encapsulates a computational unit with: +Nodes are the foundational building blocks of simulation pipelines. Each node represents a computational unit with: | File | Purpose | |------|---------| @@ -84,18 +93,20 @@ The foundational building blocks of Fluidize. Each node encapsulates a computati - No source code modification required - Automated node generation support (Public launch soon) + ### Projects -The project currently hosts a simple layer for composing and managing multiple nodes: + +Projects store a simple data layer for managing individual modules within a pipeline. | File | Purpose | |------|---------| -| `graph.json` | Node connectivity and data flow definition | +| `graph.json` | Node (scientific software) and edge (data flow) definitions | | `metadata.yaml` | Project description and configuration | -Docker engine is used for local execution. With API calls, we use the Kubernetes engine with Argo Workflow Manager. - +### Runs +Pipelines can be executed both locally and on the cloud. Local execution is handled by the Docker engine. Cloud execution is routed through our API and uses the Kubernetes engine with Argo Workflow Manager. ## Documentation @@ -107,18 +118,22 @@ Comprehensive documentation is available at [https://Fluidize-Inc.github.io/flui - [Project Orchestration](https://Fluidize-Inc.github.io/fluidize-python/projects) - [API Reference](https://Fluidize-Inc.github.io/fluidize-python/api) + ## Contributing -We would love contributions and collaborations! 
Please see our [Contributing Guide](CONTRIBUTING.md) for details. +We would love to collaborate with you! Please see our [Contributing Guide](CONTRIBUTING.md) for details. + +Also - we would love to help streamline your pipeline! Please reach out to us at [founders@fluidize.ai](mailto:founders@fluidize.ai). + -Also - we would love to help streamline your research pipeline! Please reach out at [henry@fluidize.ai](mailto:henry@fluidize.ai) or [henrybae@g.harvard.edu](mailto:henrybae@g.harvard.edu). +## Vision and Roadmap -## Roadmap +This is just the beginning of what we believe will be a really exciting new era for how we conduct research and make discoveries in science. -This is just the beginning of what we think is a really exciting new era for how we learn science and do research. We will be releasing the following tools built from this framework: +By standardizing tools, we hope to significantly increase the effectiveness of AI in research and discovery. Soon, we will be releasing the following tools built from this framework: -- **Fluidize Playground**: Automatically explore and build simulation pipelines with natural language. -- **Auto-Fluidize**: Automatically convert obscure scientific software to run anywhere +- **Auto-Fluidize**: Automatically convert any scientific software to run anywhere with our framework. +- **Fluidize AI Playground**: Explore and build simulation pipelines with natural language. ## License diff --git a/docs/getting-started/client.md b/docs/core-modules/client.md similarity index 83% rename from docs/getting-started/client.md rename to docs/core-modules/client.md index 197a6b7..b4411ae 100644 --- a/docs/getting-started/client.md +++ b/docs/core-modules/client.md @@ -1,4 +1,4 @@ -# Fluidize Client +# Client Module The Fluidize Client is the primary interface to create and edit projects. There are two interfaces for this, with more on the way. @@ -6,22 +6,22 @@ The Fluidize Client is the primary interface to create and edit projects. 
There - **API Mode**: Runs on Fluidize API to manage projects and workflows in the cloud. -## Client API - -### FluidizeClient ::: fluidize.client.FluidizeClient options: show_source: false + heading_level: 3 + show_root_heading: true members: - mode - adapters - projects - runs -### FluidizeConfig ::: fluidize.config.FluidizeConfig options: show_source: false + heading_level: 3 + show_root_heading: true members: - is_local_mode - is_api_mode diff --git a/docs/core-modules/execute.md b/docs/core-modules/execute.md deleted file mode 100644 index 726fe6e..0000000 --- a/docs/core-modules/execute.md +++ /dev/null @@ -1,25 +0,0 @@ -# Execute Module - -## Execution Management - -### ExecutionManager -::: fluidize.core.modules.execute.ExecutionManager - options: - show_source: false - -## Execution Utilities - -### PathConverter -::: fluidize.core.modules.execute.PathConverter - options: - show_source: false - -### EnvironmentBuilder -::: fluidize.core.modules.execute.EnvironmentBuilder - options: - show_source: false - -### VolumeBuilder -::: fluidize.core.modules.execute.VolumeBuilder - options: - show_source: false diff --git a/docs/core-modules/graph.md b/docs/core-modules/graph.md index c51fbd2..0919c44 100644 --- a/docs/core-modules/graph.md +++ b/docs/core-modules/graph.md @@ -1,11 +1,11 @@ # Graph Module -## Graph Management - -### ProjectGraph -::: fluidize.managers.project_graph.ProjectGraph +## Graph Manager +::: fluidize.managers.graph.GraphManager options: show_source: false + heading_level: 3 + show_root_heading: true members: - get - add_node @@ -14,26 +14,34 @@ - add_edge - delete_edge -## Graph Processing - -### GraphProcessor +## Graph Processor ::: fluidize.core.modules.graph.GraphProcessor options: show_source: false + heading_level: 3 + show_root_heading: true ## Graph Types -### GraphData -::: fluidize.core.types.GraphData +::: fluidize.core.types.graph.GraphData options: - show_source: false + heading_level: 3 + show_root_heading: true + extra: + 
show_attributes: true + -### GraphNode -::: fluidize.core.types.GraphNode +::: fluidize.core.types.graph.GraphNode options: - show_source: false + heading_level: 3 + show_root_heading: true + extra: + show_attributes: true + -### GraphEdge -::: fluidize.core.types.GraphEdge +::: fluidize.core.types.graph.GraphEdge options: - show_source: false + heading_level: 3 + show_root_heading: true + extra: + show_attributes: true diff --git a/docs/core-modules/index.md b/docs/core-modules/index.md new file mode 100644 index 0000000..f75ba97 --- /dev/null +++ b/docs/core-modules/index.md @@ -0,0 +1,43 @@ +# Core Modules + +The Fluidize library is composed of a set of core modules that provide a high-level interface for managing Fluidize resources. These modules are designed to be used together to build and execute scientific computing pipelines. + +## [Client](client.md) + +The **Fluidize Client** provides a unified, high-level interface for managing Fluidize resources in both local and cloud API modes. It serves as the primary entry point for creating and running pipelines across these environments. + +## [Projects](projects.md) + +The **Projects** module provides tools for managing project lifecycles: + +- [**Registry Manager**](projects.md#fluidize.managers.registry.RegistryManager): + Handles the user’s complete project registry, with functionality to create, edit, and delete projects. + +- [**Project Manager**](projects.md#fluidize.managers.project.ProjectManager): + Focuses on individual projects, managing the project graph, nodes, and runs, and supporting execution of project-specific workflows. + +## [Graph](graph.md) + +The **Graph** module provides tools for managing the project graph, which is a representation of the simulation pipeline. 
+ +In a Fluidize project, pipelines are represented as a directed acyclic graph (DAG) where each node represents a simulation module and each edge represents the flow of data between nodes: + +- [**Graph Manager**](graph.md#fluidize.managers.graph.GraphManager): + Manages the project graph and provides high-level functionality to create, edit, and delete nodes and edges. + +- [**Graph Processor**](graph.md#fluidize.managers.graph.graph_processor.GraphProcessor): + Manages specific operations on the graph data structure within the local filesystem. + +## [Node](node.md) + +The **Node** module provides tools for managing the metadata, properties, and parameters of individual nodes within a project. + +## [Run](run.md) + +The **Run** module provides tools for managing simulation pipeline runs within a project: + +- [**Runs Manager**](run.md#fluidize.managers.run.RunsManager): + Manages the high-level execution of runs and retrieves run status. + +- [**Project Runner**](run.md#fluidize.core.modules.run.project.ProjectRunner): + Manages the specific execution details of a project pipeline, including environment preparation and node execution order. 
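+A valid node execution order for such a DAG can be derived with Kahn's algorithm (a standard topological sort). The sketch below is illustrative only, not the actual ProjectRunner implementation, and the node/edge shapes are assumptions modeled on the graph.json description above.

```python
from collections import defaultdict, deque

# Illustrative node/edge shapes modeled on graph.json-style definitions;
# names like "mesh" and "solver" are hypothetical.
nodes = ["mesh", "solver", "viz"]
edges = [{"source": "mesh", "target": "solver"},
         {"source": "solver", "target": "viz"}]

def execution_order(nodes, edges):
    """Topologically sort the pipeline DAG (Kahn's algorithm)."""
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for e in edges:
        children[e["source"]].append(e["target"])
        indegree[e["target"]] += 1
    # Start with nodes that have no unmet dependencies.
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for c in children[n]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle; pipelines must be acyclic")
    return order
```

The cycle check is what makes the "acyclic" requirement enforceable: any leftover node after the sort implies a dependency loop that no execution order can satisfy.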
diff --git a/docs/core-modules/node.md b/docs/core-modules/node.md new file mode 100644 index 0000000..88d11a1 --- /dev/null +++ b/docs/core-modules/node.md @@ -0,0 +1,7 @@ +# Node Module + +::: fluidize.managers.node.NodeManager + options: + show_source: false + heading_level: 3 + show_root_heading: true diff --git a/docs/core-modules/projects.md b/docs/core-modules/projects.md index 8f8df34..6e8c8ff 100644 --- a/docs/core-modules/projects.md +++ b/docs/core-modules/projects.md @@ -1,13 +1,15 @@ # Projects Module -## Project Management - -### Projects Manager -::: fluidize.managers.projects.Projects +## Registry +::: fluidize.managers.registry.RegistryManager options: show_source: false + heading_level: 3 + show_root_heading: true -### Project Class -::: fluidize.managers.project_manager.Project +## Project +::: fluidize.managers.project.ProjectManager options: show_source: false + heading_level: 3 + show_root_heading: true diff --git a/docs/core-modules/run.md b/docs/core-modules/run.md index 4192bc6..dc6be19 100644 --- a/docs/core-modules/run.md +++ b/docs/core-modules/run.md @@ -2,10 +2,11 @@ ## Run Management -### ProjectRuns -::: fluidize.managers.project_runs.ProjectRuns +::: fluidize.managers.runs.RunsManager options: show_source: false + heading_level: 3 + show_root_heading: true members: - run_flow - list @@ -13,12 +14,15 @@ ## Run Execution -### RunJob ::: fluidize.core.modules.run.RunJob options: show_source: false + heading_level: 3 + show_signature: false + show_root_heading: true -### ProjectRunner -::: fluidize.core.modules.run.ProjectRunner +::: fluidize.core.modules.run.project.ProjectRunner options: show_source: false + heading_level: 3 + show_root_heading: true diff --git a/docs/getting-started/projects-nodes.md b/docs/getting-started/projects-nodes.md deleted file mode 100644 index 8d24def..0000000 --- a/docs/getting-started/projects-nodes.md +++ /dev/null @@ -1,5 +0,0 @@ -# Projects and Nodes - -*Content coming soon...* - -For API details, see 
the [Projects module documentation](../core-modules/projects.md). diff --git a/docs/getting-started/quickstart.md b/docs/getting-started/quickstart.md new file mode 100644 index 0000000..bad5562 --- /dev/null +++ b/docs/getting-started/quickstart.md @@ -0,0 +1 @@ +# Getting Started diff --git a/docs/index.md b/docs/index.md index 6fcb70a..93c2428 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,8 +1,94 @@ -# fluidize-python +# Fluidize -[![Release](https://img.shields.io/github/v/release/Fluidize-Inc/fluidize-python)](https://img.shields.io/github/v/release/Fluidize-Inc/fluidize-python) -[![Build status](https://img.shields.io/github/actions/workflow/status/Fluidize-Inc/fluidize-python/main.yml?branch=main)](https://github.com/Fluidize-Inc/fluidize-python/actions/workflows/main.yml?query=branch%3Amain) -[![Commit activity](https://img.shields.io/github/commit-activity/m/Fluidize-Inc/fluidize-python)](https://img.shields.io/github/commit-activity/m/Fluidize-Inc/fluidize-python) -[![License](https://img.shields.io/github/license/Fluidize-Inc/fluidize-python)](https://img.shields.io/github/license/Fluidize-Inc/fluidize-python) +[![Python](https://img.shields.io/badge/python-3.9%2B-blue?style=for-the-badge&logo=python&logoColor=white)](https://python.org) +[![PyPI](https://img.shields.io/pypi/v/fluidize?style=for-the-badge&logo=pypi&logoColor=white)](https://pypi.org/project/fluidize/) +[![License](https://img.shields.io/github/license/Fluidize-Inc/fluidize-python?style=for-the-badge)](LICENSE) +[![Documentation](https://img.shields.io/badge/docs-available-brightgreen?style=for-the-badge&logo=gitbook&logoColor=white)](https://Fluidize-Inc.github.io/fluidize-python/) -Python package for automatic generation of scientific computing software pipelines. +## About + + **fluidize-python** is a library for building modular, reproducible scientific computing pipelines. 
It provides a unified interface to a wide range of physical simulation tools, eliminating the need to navigate the inconsistent, incomplete instructions that often vary from tool to tool. + +This library marks our first step toward AI-orchestrated scientific computing. By standardizing tools and practices within our framework, AI agents can automatically build, configure, and execute computational pipelines across domains and simulation platforms. + +Our goal is to improve today’s simulation tools so AI can assist researchers and scientists in accelerating the pace of innovation and scientific discovery. + +## Installation + +### Prerequisites + +- Python 3.9+ +- Docker Desktop (for local execution). Download and install it from https://docs.docker.com/desktop/. + + After installation, verify with: + ```bash + docker --version + ``` + + + +### From PyPI +```bash +pip install fluidize +``` + +### From Source +```bash +git clone https://github.com/Fluidize-Inc/fluidize-python.git +cd fluidize-python +make install +``` + +## Run Examples + +Example projects are located in the [examples/](https://github.com/Fluidize-Inc/fluidize-python/tree/main/examples) folder. There you can find a [Jupyter Notebook](https://github.com/Fluidize-Inc/fluidize-python/blob/main/examples/demo.ipynb) demonstrating a simple simulation. + +## Architecture + +At Fluidize, we believe strong organization leads to better reproducibility and scalability. + +We treat each simulation pipeline as an individual project. Within projects, each pipeline is treated as a DAG (directed acyclic graph), where nodes represent individual pieces of scientific software (e.g., inputs, solvers, and visualization tools) and edges represent data flow between nodes. + + +### Nodes +Nodes are the foundational building blocks of simulation pipelines. 
Each node represents a computational unit with: + +| File | Purpose | +|------|---------| +| `properties.yaml` | Container configuration, working directory, and output paths | +| `metadata.yaml` | Node description, version, authors, and repository URL | +| `Dockerfile` | Environment setup and dependency installation | +| `parameters.json` | Tunable parameters for experiments | +| `main.sh` | Execution script for the source code | +| `source/` | Original scientific computing code | + +**Key Features:**
+- Predictable input/output paths
+- Modular and extensible design
+- No source code modification required
+- Automated node generation support (Public launch soon) + + +### Projects + +Projects store a simple data layer for managing individual modules within a pipeline. + +| File | Purpose | +|------|---------| +| `graph.json` | Node (scientific software) and edge (data flow) definitions | +| `metadata.yaml` | Project description and configuration | + + +### Runs + +Pipelines can be executed both locally and on the cloud. Local execution is handled by the Docker engine. Cloud execution is routed through our API and uses the Kubernetes engine with Argo Workflow Manager. + +## Contributing + +We would love to collaborate with you! Please see our [Contributing Guide](https://github.com/Fluidize-Inc/fluidize-python/blob/main/CONTRIBUTING.md) for details. + +Also - we would love to help streamline your pipeline! Please reach out to us at [founders@fluidize.ai](mailto:founders@fluidize.ai). + +## License + +This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. 
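+The upsert behavior for a node's `parameters.json` can be sketched in plain Python: update an existing parameter by name, or append it if absent. This is a simplified stand-in for the library's upsert logic; the `motor_strength` name comes from the bundled MuJoCo example, while the exact file schema here is an assumption for illustration.

```python
import json

# Simplified stand-in for parameter upsert: the real parameters.json
# carries more fields (latex, location, options, ...); only name/value
# are modeled here.
params = {"parameters": [{"name": "motor_strength", "value": 1.0}]}

def upsert(doc, name, value):
    """Update the parameter with this name, or append it if missing."""
    for p in doc["parameters"]:
        if p["name"] == name:
            p["value"] = value
            break
    else:
        doc["parameters"].append({"name": name, "value": value})
    return doc

upsert(params, "motor_strength", 2.5)  # updates the existing entry
upsert(params, "timestep", 0.01)       # appends a new entry
serialized = json.dumps(params)
```

Keeping tunable values in a flat, named list like this is what lets experiments be re-run with different parameters without touching the node's source code.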
diff --git a/examples/example-projects/MUJOCO/Mujoco-Simulation/parameters.json b/examples/example-projects/MUJOCO/Mujoco-Simulation/parameters.json index 7940dbc..57b41a6 100644 --- a/examples/example-projects/MUJOCO/Mujoco-Simulation/parameters.json +++ b/examples/example-projects/MUJOCO/Mujoco-Simulation/parameters.json @@ -8,9 +8,6 @@ "name": "motor_strength", "latex": null, "location": [ - "source/pinata_simulation.py", - "source/pinata_simulation.py", - "source/pinata_simulation.py", "source/pinata_simulation.py" ], "options": null, diff --git a/fluidize/adapters/local/graph.py b/fluidize/adapters/local/graph.py index 0f5aedc..70ea95f 100644 --- a/fluidize/adapters/local/graph.py +++ b/fluidize/adapters/local/graph.py @@ -7,14 +7,11 @@ from typing import Optional -from fluidize.core.modules.graph.parameters import parse_parameters_from_json from fluidize.core.modules.graph.processor import GraphProcessor from fluidize.core.types.graph import GraphData, GraphEdge, GraphNode -from fluidize.core.types.node import nodeMetadata_simulation, nodeProperties_simulation +from fluidize.core.types.node import nodeMetadata_simulation, nodeParameters_simulation, nodeProperties_simulation from fluidize.core.types.parameters import Parameter from fluidize.core.types.project import ProjectSummary -from fluidize.core.utils.dataloader.data_loader import DataLoader -from fluidize.core.utils.dataloader.data_writer import DataWriter from fluidize.core.utils.pathfinder.path_finder import PathFinder @@ -171,9 +168,9 @@ def get_parameters(self, project: ProjectSummary, node_id: str) -> list[Paramete Returns: A list of Parameter objects for the node """ - parameters_path = PathFinder.get_node_parameters_path(project, node_id) - data = DataLoader.load_json(parameters_path) - return parse_parameters_from_json(data) + node_path = PathFinder.get_node_path(project, node_id) + parameters_model = nodeParameters_simulation.from_file(node_path) + return parameters_model.parameters def 
upsert_parameter(self, project: ProjectSummary, node_id: str, parameter: Parameter) -> Parameter: """ @@ -187,12 +184,11 @@ def upsert_parameter(self, project: ProjectSummary, node_id: str, parameter: Par Returns: The upserted parameter """ - parameters_path = PathFinder.get_node_parameters_path(project, node_id) - data = DataLoader.load_json(parameters_path) - params = parse_parameters_from_json(data) + node_path = PathFinder.get_node_path(project, node_id) + parameters_model = nodeParameters_simulation.from_file(node_path) # Check if parameter with same name exists - for p in params: + for p in parameters_model.parameters: if p.name == parameter.name: # Update the existing parameter with new values p.value = parameter.value @@ -211,13 +207,10 @@ def upsert_parameter(self, project: ProjectSummary, node_id: str, parameter: Par break else: # Parameter doesn't exist, add it - params.append(parameter) + parameters_model.parameters.append(parameter) - # Write updated parameters back - DataWriter.write_json( - filepath=parameters_path, - data={"parameters": [p.model_dump() for p in params]}, - ) + # Save updated parameters back + parameters_model.save() return parameter def set_parameters(self, project: ProjectSummary, node_id: str, parameters: list[Parameter]) -> list[Parameter]: @@ -232,10 +225,10 @@ def set_parameters(self, project: ProjectSummary, node_id: str, parameters: list Returns: The list of parameters that were set """ - parameters_path = PathFinder.get_node_parameters_path(project, node_id) - data = {"parameters": [p.model_dump() for p in parameters]} - - DataWriter.write_json(filepath=parameters_path, data=data) + node_path = PathFinder.get_node_path(project, node_id) + parameters_model = nodeParameters_simulation.from_file(node_path) + parameters_model.parameters = parameters + parameters_model.save() return parameters def show_parameters(self, project: ProjectSummary, node_id: str) -> str: diff --git a/fluidize/backends/local/graph.py 
b/fluidize/backends/local/graph.py deleted file mode 100644 index 196b1cc..0000000 --- a/fluidize/backends/local/graph.py +++ /dev/null @@ -1,156 +0,0 @@ -""" -Local filesystem-based graph backend interface. - -This module provides the local backend interface for graph operations, -wrapping the core GraphProcessor with backend-specific functionality. -""" - -from typing import Optional - -from fluidize.core.modules.graph.processor import GraphProcessor -from fluidize.core.types.graph import GraphData, GraphEdge, GraphNode -from fluidize.core.types.node import nodeMetadata_simulation, nodeProperties_simulation -from fluidize.core.types.project import ProjectSummary - - -class GraphHandler: - """ - Local filesystem-based graph processor backend. - - This class provides a clean interface for graph operations using the local backend, - wrapping the core GraphProcessor functionality. - """ - - def __init__(self) -> None: - """Initialize the local graph processor.""" - pass - - def get_graph(self, project: ProjectSummary) -> GraphData: - """ - Get the complete graph for a project. - - Args: - project: The project to get the graph for - - Returns: - GraphData containing all nodes and edges - """ - processor = GraphProcessor(project) - return processor.get_graph() - - def insert_node(self, project: ProjectSummary, node: GraphNode, sim_global: bool = True) -> GraphNode: - """ - Insert a new node into the project graph. 
- - Args: - project: The project to add the node to - node: The node to insert - sim_global: Whether to use global simulations (placeholder for future) - - Returns: - The inserted node - """ - processor = GraphProcessor(project) - return processor.insert_node(node, sim_global) - - def insert_node_from_scratch( - self, - project: ProjectSummary, - GraphNode: GraphNode, - nodeProperties: nodeProperties_simulation, - nodeMetadata: nodeMetadata_simulation, - repo_link: Optional[str] = None, - ) -> GraphNode: - """ - Insert a new node into the project graph from scratch. - - Args: - project: The project to add the node to - nodeProperties: The properties of the node to insert - sim_global: Whether to use global simulations (placeholder for future) - - Returns: - The inserted node - """ - processor = GraphProcessor(project) - return processor.insert_node_from_scratch(GraphNode, nodeProperties, nodeMetadata, repo_link) - - def update_node_position(self, project: ProjectSummary, node: GraphNode) -> GraphNode: - """ - Update a node's position in the graph. - - Args: - project: The project containing the node - node: The node with updated position - - Returns: - The updated node - """ - processor = GraphProcessor(project) - return processor.update_node_position(node) - - def delete_node(self, project: ProjectSummary, node_id: str) -> None: - """ - Delete a node from the project graph. - - Args: - project: The project containing the node - node_id: ID of the node to delete - """ - processor = GraphProcessor(project) - processor.delete_node(node_id) - - def upsert_edge(self, project: ProjectSummary, edge: GraphEdge) -> GraphEdge: - """ - Add or update an edge in the project graph. 
- - Args: - project: The project containing the graph - edge: The edge to upsewrt - - Returns: - The upserted edge - """ - processor = GraphProcessor(project) - return processor.upsert_edge(edge) - - def delete_edge(self, project: ProjectSummary, edge_id: str) -> None: - """ - Delete an edge from the project graph. - - Args: - project: The project containing the edge - edge_id: ID of the edge to delete - """ - processor = GraphProcessor(project) - processor.delete_edge(edge_id) - - def ensure_graph_initialized(self, project: ProjectSummary) -> None: - """ - Ensure the project has a graph.json file initialized. - - Args: - project: The project to initialize the graph for - """ - processor = GraphProcessor(project) - processor._ensure_graph_file_exists() - - def show_graph_ascii(self, project: ProjectSummary) -> str: - """ - Get ASCII representation of the project graph. - - Args: - project: The project to visualize - - Returns: - ASCII string representation of the graph - """ - processor = GraphProcessor(project) - graph_data = processor.get_graph() - - # Create Graph model from the data to use ASCII visualization - from fluidize.core.modules.graph.model import Graph - - graph = Graph(nodes=graph_data.nodes, edges=graph_data.edges) - - return graph.to_ascii() diff --git a/fluidize/client.py b/fluidize/client.py index edc4b7b..3d81fd1 100644 --- a/fluidize/client.py +++ b/fluidize/client.py @@ -15,7 +15,7 @@ from .adapters.local import LocalAdapter from .config import FluidizeConfig -from .managers.projects import Projects +from .managers.registry import RegistryManager class FluidizeClient: @@ -34,8 +34,6 @@ class FluidizeClient: def __init__(self, mode: Literal["local", "api", "auto"] = "auto", base_path: Optional[Path] = None): """ - Initialize the Fluidize client. - Args: mode: Operation mode - "local", "api", or "auto" for environment detection base_path: Optional custom base path for local mode. 
If None, uses ~/.fluidize @@ -52,17 +50,25 @@ def __init__(self, mode: Literal["local", "api", "auto"] = "auto", base_path: Op self._adapter = self._initialize_adapter() # Initialize resource managers - self.projects = Projects(self._adapter) + self.projects = RegistryManager(self._adapter) def _initialize_adapter(self) -> Any: - """Initialize the appropriate adapter based on the mode.""" + """Initialize the appropriate adapter based on the mode. + + Returns: + Any: The initialized adapter + """ if self.config.is_api_mode(): return self._initialize_api_adapter() else: return self._initialize_local_adapter() def _initialize_api_adapter(self) -> FluidizeSDK: - """Initialize the API adapter using FluidizeSDK.""" + """Initialize the API adapter using FluidizeSDK. + + Returns: + FluidizeSDK: The initialized API adapter + """ if not self.config.api_key: msg = "API mode requires an API key. Set the FLUIDIZE_API_KEY environment variable." raise ValueError(msg) @@ -72,18 +78,35 @@ def _initialize_api_adapter(self) -> FluidizeSDK: ) def _initialize_local_adapter(self) -> LocalAdapter: - """Initialize the local adapter.""" + """Initialize the local adapter. + + Returns: + LocalAdapter: The initialized local adapter + """ return LocalAdapter(self.config) @property def mode(self) -> str: - """Get the current operation mode.""" + """Get the current operation mode. + + Returns: + str: The current operation mode + """ return self.config.mode @property def adapter(self) -> Any: - """Access the underlying adapter for advanced operations.""" + """Access the underlying adapter for advanced operations. + + Returns: + Any: The underlying adapter + """ return self._adapter def __repr__(self) -> str: + """Return a string representation of the client. 
+ + Returns: + str: A string representation of the client + """ return f"FluidizeClient(mode='{self.mode}')" diff --git a/fluidize/config.py b/fluidize/config.py index 68e2f43..680be97 100644 --- a/fluidize/config.py +++ b/fluidize/config.py @@ -91,7 +91,11 @@ def check_docker_available(self) -> bool: return result.returncode == 0 def warn_if_docker_unavailable(self) -> None: - """Issue a warning if Docker is not available for local runs.""" + """Issue a warning if Docker is not available for local runs. + + Returns: + None + """ if not self.check_docker_available(): warnings.warn( "Docker is not available. Local simulation runs will not be possible. " diff --git a/fluidize/core/modules/execute/execution_manager.py b/fluidize/core/modules/execute/execution_manager.py index bd3a4f6..465503e 100644 --- a/fluidize/core/modules/execute/execution_manager.py +++ b/fluidize/core/modules/execute/execution_manager.py @@ -11,6 +11,7 @@ from fluidize.core.types.execution_models import ExecutionMode, create_execution_context from fluidize.core.types.node import nodeProperties_simulation from fluidize.core.types.project import ProjectSummary +from fluidize.core.utils.logger.execution_logger import ExecutionLogger from .docker_client import DockerExecutionClient @@ -44,6 +45,7 @@ def execute_node( execution_mode: ExecutionMode = ExecutionMode.LOCAL_DOCKER, run_number: Optional[int] = None, run_id: Optional[str] = None, + run_metadata: Optional[object] = None, **kwargs: Any, ) -> dict[str, Any]: """ @@ -93,7 +95,7 @@ def execute_node( logger.warning(f"Specification warning: {warning}") # Step 4: Execute based on mode - result = self._execute_with_mode(context.execution_mode, spec, **kwargs) + result = self._execute_with_mode(context.execution_mode, spec, project, node, run_metadata, **kwargs) logger.info(f"Node {node.node_id} execution completed: {result.get('success', False)}") except Exception as e: @@ -102,14 +104,22 @@ def execute_node( else: return result - def 
_execute_with_mode(self, execution_mode: ExecutionMode, spec: Any, **kwargs: Any) -> dict[str, Any]: + def _execute_with_mode( + self, + execution_mode: ExecutionMode, + spec: Any, + project: ProjectSummary, + node: nodeProperties_simulation, + run_metadata: Optional[object], + **kwargs: Any, + ) -> dict[str, Any]: """Execute using the appropriate client based on execution mode.""" if execution_mode == ExecutionMode.LOCAL_DOCKER: - return self._execute_docker(spec, **kwargs) + return self._execute_docker(spec, project, node, run_metadata, **kwargs) elif execution_mode == ExecutionMode.VM_DOCKER: - return self._execute_vm(spec, **kwargs) + return self._execute_vm(spec, project, node, run_metadata, **kwargs) elif execution_mode == ExecutionMode.KUBERNETES: # Kubernetes execution not implemented yet @@ -118,12 +128,19 @@ def _execute_with_mode(self, execution_mode: ExecutionMode, spec: Any, **kwargs: elif execution_mode == ExecutionMode.CLOUD_BATCH: # Could integrate with existing batch execution logger.warning("Cloud Batch execution not yet implemented, falling back to VM") - return self._execute_vm(spec, **kwargs) + return self._execute_vm(spec, project, node, run_metadata, **kwargs) else: return {"success": False, "error": f"Unsupported execution mode: {execution_mode.value}"} - def _execute_docker(self, spec: Any, **kwargs: Any) -> dict[str, Any]: + def _execute_docker( + self, + spec: Any, + project: ProjectSummary, + node: nodeProperties_simulation, + run_metadata: Optional[object], + **kwargs: Any, + ) -> dict[str, Any]: """Execute using Docker client.""" try: if not self.docker_client: @@ -135,6 +152,9 @@ def _execute_docker(self, spec: Any, **kwargs: Any) -> dict[str, Any]: # Execute container result = self.docker_client.run_container(spec.container_spec, spec.volume_spec.volumes, **kwargs) + + # Save execution logs + ExecutionLogger.save_execution_logs(project, run_metadata, str(node.node_id), result.stdout, result.stderr) except Exception as e: 
logger.exception("Docker execution failed") return {"success": False, "error": str(e), "execution_mode": "local_docker"} @@ -148,7 +168,14 @@ def _execute_docker(self, spec: Any, **kwargs: Any) -> dict[str, Any]: "execution_mode": "local_docker", } - def _execute_vm(self, spec: Any, **kwargs: Any) -> dict[str, Any]: + def _execute_vm( + self, + spec: Any, + project: ProjectSummary, + node: nodeProperties_simulation, + run_metadata: Optional[object], + **kwargs: Any, + ) -> dict[str, Any]: """Execute using VM client.""" try: if not self.vm_client: @@ -156,6 +183,9 @@ def _execute_vm(self, spec: Any, **kwargs: Any) -> dict[str, Any]: # Execute container on VM result = self.vm_client.run_container(spec.container_spec, spec.volume_spec.volumes, **kwargs) + + # Save execution logs + ExecutionLogger.save_execution_logs(project, run_metadata, str(node.node_id), result.stdout, result.stderr) except Exception as e: logger.exception("VM execution failed") return {"success": False, "error": str(e), "execution_mode": "vm_docker"} diff --git a/fluidize/core/modules/run/node/methods/local/ExecuteNew.py b/fluidize/core/modules/run/node/methods/local/ExecuteNew.py index 5f4b1a8..2fd01f7 100644 --- a/fluidize/core/modules/run/node/methods/local/ExecuteNew.py +++ b/fluidize/core/modules/run/node/methods/local/ExecuteNew.py @@ -15,8 +15,7 @@ from fluidize.core.types.node import nodeProperties_simulation from fluidize.core.types.project import ProjectSummary from fluidize.core.types.runs import ContainerPaths, NodePaths -from fluidize.core.utils.dataloader.data_writer import DataWriter -from fluidize.core.utils.pathfinder.path_finder import PathFinder +from fluidize.core.utils.logger.execution_logger import ExecutionLogger logger = logging.getLogger(__name__) @@ -59,36 +58,6 @@ def print_job_info(self) -> None: if self.prev_node: logger.info(f"Previous node: {self.prev_node.node_id}") - def _save_execution_logs(self, stdout: str, stderr: str) -> None: - """Save Docker execution logs 
using PathFinder methods.""" - if not self.run_metadata or not hasattr(self.run_metadata, "run_number"): - logger.warning("No run metadata available, skipping log file saving") - return - - try: - # Create nodes log directory - nodes_log_dir = PathFinder.get_logs_path(self.project, self.run_metadata.run_number) / "nodes" - DataWriter.create_directory(nodes_log_dir) - - # Save stdout - if stdout: - stdout_path = PathFinder.get_log_path( - self.project, self.run_metadata.run_number, str(self.node.node_id), "stdout" - ) - DataWriter.write_text(stdout_path, stdout) - logger.info(f"Saved stdout log to: {stdout_path}") - - # Save stderr - if stderr: - stderr_path = PathFinder.get_log_path( - self.project, self.run_metadata.run_number, str(self.node.node_id), "stderr" - ) - DataWriter.write_text(stderr_path, stderr) - logger.info(f"Saved stderr log to: {stderr_path}") - - except Exception as e: - logger.warning(f"Failed to save execution logs: {e}") - def _execute_node(self) -> str: """ Main execution method using new universal utilities. @@ -145,7 +114,9 @@ def _execute_node(self) -> str: ) # Step 6: Save execution logs - self._save_execution_logs(result.stdout, result.stderr) + ExecutionLogger.save_execution_logs( + self.project, self.run_metadata, str(self.node.node_id), result.stdout, result.stderr + ) # Step 7: Handle results if result.success: diff --git a/fluidize/core/modules/run/node/node_runner.py b/fluidize/core/modules/run/node/node_runner.py index a35668e..2684371 100644 --- a/fluidize/core/modules/run/node/node_runner.py +++ b/fluidize/core/modules/run/node/node_runner.py @@ -8,6 +8,10 @@ # RunJob now uses a strategy instance to dynamically choose behavior. class RunJob: + """ + A job that runs for a single node. 
+ """ + def __init__( self, project: ProjectSummary, @@ -18,6 +22,16 @@ def __init__( run_id: Optional[str] = None, run_metadata: Optional[object] = None, # Add run metadata ): + """ + Args: + project: The project this node belongs to + strategyClass: The strategy class to use for execution + nodeProperties_simulation: The node properties to run + prev_nodeProperties_simulation: The previous node properties (optional) + mlflow_tracker: The MLflow tracker (optional) + run_id: The run ID (optional) + run_metadata: The run metadata (optional) + """ self.project = project self.nodeProperties_simulation = nodeProperties_simulation self.prev_nodeProperties_simulation = prev_nodeProperties_simulation diff --git a/fluidize/core/modules/run/project/project_runner.py b/fluidize/core/modules/run/project/project_runner.py index b0164e1..7f9c76a 100644 --- a/fluidize/core/modules/run/project/project_runner.py +++ b/fluidize/core/modules/run/project/project_runner.py @@ -15,29 +15,50 @@ class ProjectRunner: """ def __init__(self, project: ProjectSummary): - """Initialize with project and get handler""" + """ + Args: + project: ProjectSummary + """ self.project = project self.handler = get_handler("project_runner", project) def prepare_run_environment(self, metadata: RunFlowPayload) -> int: """ - Create a new run folder for the project - Returns the run number + Create a new run folder for the project. + + Args: + metadata: RunFlowPayload + + Returns: + int: Run number """ return cast(int, self.handler.prepare_run_environment(metadata)) async def execute_node(self, node_id: str, prev_node_id: Optional[str] = None, **kwargs: Any) -> dict[str, Any]: """ - Execute a single node within the project run - Returns the execution result + Execute a single node within the project run. 
+ + Args: + node_id: Node ID + prev_node_id: Previous node ID + **kwargs: Additional keyword arguments + + Returns: + dict[str, Any]: Execution result """ return await asyncio.to_thread(self.handler.execute_node, node_id, prev_node_id=prev_node_id, **kwargs) async def execute_flow(self, nodes_to_run: list[str], prev_nodes: list[str], **kwargs: Any) -> list[dict[str, Any]]: """ - Execute a flow of nodes in the correct order - nodes_to_run: List of node IDs - Returns execution results for all nodes + Execute a flow of nodes in order. + + Args: + nodes_to_run: List of node IDs + prev_nodes: List of previous node IDs + **kwargs: Additional keyword arguments + + Returns: + list[dict[str, Any]]: Execution results for all nodes """ # Make sure that nodes_to_run and prev_nodes are same size lists if len(nodes_to_run) != len(prev_nodes): diff --git a/fluidize/core/types/file_models/file_model_base.py b/fluidize/core/types/file_models/file_model_base.py index 3dc25f1..f037ad7 100644 --- a/fluidize/core/types/file_models/file_model_base.py +++ b/fluidize/core/types/file_models/file_model_base.py @@ -1,6 +1,6 @@ from __future__ import annotations -from typing import Any, TypeVar, Union +from typing import Any, Optional, TypeVar, Union from pydantic import BaseModel, ConfigDict, PrivateAttr, ValidationError from upath import UPath @@ -49,7 +49,7 @@ def from_file(cls: type[T], directory: Union[str, UPath]) -> T: return instance @classmethod - def from_dict_and_path(cls: type[T], data: dict, path: UPath) -> T: + def from_dict_and_path(cls: type[T], data: Any, path: Optional[UPath]) -> T: """Creates a model instance from a dictionary and a path, without reading the file again.""" if not data: raise ValueError() diff --git a/fluidize/core/types/file_models/json_file_model_base.py b/fluidize/core/types/file_models/json_file_model_base.py new file mode 100644 index 0000000..b0e8ff2 --- /dev/null +++ b/fluidize/core/types/file_models/json_file_model_base.py @@ -0,0 +1,106 @@ +from 
__future__ import annotations
+
+from typing import Any, TypeVar, Union
+
+from pydantic import BaseModel, ConfigDict, PrivateAttr, ValidationError
+from upath import UPath
+
+T = TypeVar("T", bound="JSONFileModelBase")
+
+
+class JSONFileModelBase(BaseModel):
+    _filepath: Union[UPath, None] = PrivateAttr(default=None)
+
+    @property
+    def filepath(self) -> UPath:
+        """Return the exact path to the model file. Raises if not set."""
+        if not self._filepath:
+            raise ValueError()
+        return self._filepath
+
+    @property
+    def directory(self) -> UPath:
+        """Return the folder containing the model file. Raises if filepath not set."""
+        fp = self.filepath
+        return fp.parent
+
+    @classmethod
+    def from_file(cls: type[T], directory: Union[str, UPath]) -> T:
+        from fluidize.core.utils.dataloader.data_loader import DataLoader
+
+        filename = getattr(cls, "_filename", None)
+        if not filename:
+            raise TypeError()
+
+        path = UPath(directory) / filename
+        data = DataLoader.load_json(path)
+
+        if not data:
+            raise FileNotFoundError()
+
+        try:
+            instance = cls.model_validate(data)
+        except ValidationError:
+            raise
+        except Exception as e:
+            raise ValueError() from e
+        else:
+            instance._filepath = path
+            return instance
+
+    @classmethod
+    def from_dict_and_path(cls: type[T], data: dict, path: UPath) -> T:
+        """Creates a model instance from a dictionary and a path, without reading the file again."""
+        if not data:
+            raise ValueError()
+
+        try:
+            instance = cls.model_validate(data)
+        except ValidationError:
+            raise
+        except Exception as e:
+            raise ValueError() from e
+        else:
+            instance._filepath = path
+            return instance
+
+    def model_dump_wrapped(self) -> dict[str, Any]:
+        config = getattr(self, "Key", None)
+        key = getattr(config, "key", None)
+
+        # Use JSON mode in both branches so the dumped values are always
+        # JSON-serializable when handed to DataWriter.write_json.
+        if not key:
+            return self.model_dump(mode="json")
+
+        return {key: self.model_dump(mode="json")}
+
+    def save(self, directory: UPath | None = None) -> None:
+        from fluidize.core.utils.dataloader.data_loader import DataLoader
+        from 
fluidize.core.utils.dataloader.data_writer import DataWriter
+
+        if directory:
+            filename = getattr(self.__class__, "_filename", None)
+            if not filename:
+                raise TypeError()
+            self._filepath = UPath(directory) / filename
+
+        if not self._filepath:
+            raise ValueError()
+
+        # Load existing data to preserve other keys, if the file already exists.
+        # Pass a new UPath object to avoid issues with object caching if it's the same file.
+        existing_data = DataLoader.load_json(UPath(self._filepath))
+
+        new_data = self.model_dump_wrapped()
+        existing_data.update(new_data)
+
+        DataWriter.write_json(self._filepath, existing_data)
+
+    def edit(self, **kwargs: Any) -> None:
+        for key, value in kwargs.items():
+            if hasattr(self, key):
+                setattr(self, key, value)
+            else:
+                raise AttributeError()
+        self.save()
+
+    model_config = ConfigDict(arbitrary_types_allowed=True)
diff --git a/fluidize/core/types/file_models/parameters_model.py b/fluidize/core/types/file_models/parameters_model.py
new file mode 100644
index 0000000..2c2abd4
--- /dev/null
+++ b/fluidize/core/types/file_models/parameters_model.py
@@ -0,0 +1,56 @@
+from typing import Any, ClassVar
+
+from pydantic import Field, model_validator
+
+from fluidize.core.constants import FileConstants
+from fluidize.core.types.parameters import Parameter
+
+from .json_file_model_base import JSONFileModelBase
+
+
+class ParametersModel(JSONFileModelBase):
+    """
+    A base model for parameters objects stored in JSON structure.
+
+    This model provides two main functionalities:
+    1. A validator to automatically unpack nested data based on a 'key'
+       from the subclass's Config.
+    2. A method to wrap the model's data back into the nested structure
+       for serialization.
+    """
+
+    _filename: ClassVar[str] = FileConstants.PARAMETERS_SUFFIX
+    parameters: list[Parameter] = Field(default_factory=list)
+
+    @model_validator(mode="before")
+    @classmethod
+    def _unpack_and_validate(cls, data: Any) -> Any:
+        """
+        Unpacks and validates the data against the key
+        specified in the subclass's Config.
+        """
+        if not isinstance(data, dict):
+            return data
+
+        config = getattr(cls, "Key", None)
+        key = getattr(config, "key", None)
+
+        # If there's no key in the config or the key is not in the data,
+        # assume the data is already in the correct, unpacked structure.
+        if not key or key not in data:
+            return data
+
+        unpacked_data = data[key]
+        if not isinstance(unpacked_data, list):
+            # If parameters is not a list, treat it as empty
+            unpacked_data = []
+
+        # Return data in the format expected by the model
+        return {"parameters": unpacked_data}
+
+    def model_dump_wrapped(self) -> dict[str, Any]:
+        """Override to avoid double wrapping of parameters key."""
+        return {"parameters": [p.model_dump() for p in self.parameters]}
+
+    class Key:
+        key = "parameters"
diff --git a/fluidize/core/types/graph.py b/fluidize/core/types/graph.py
index 46b337c..97fdb20 100644
--- a/fluidize/core/types/graph.py
+++ b/fluidize/core/types/graph.py
@@ -10,31 +10,60 @@
 class Position(BaseModel):
-    x: float
-    y: float
+    """Position of a node in layout space."""
+
+    x: float  #: X coordinate in layout space.
+    y: float  #: Y coordinate in layout space.
 
 
 class graphNodeData(BaseModel):
-    label: str
-    simulation_id: Optional[str] = None
+    """Extra metadata for a node."""
+
+    label: str  #: Node label.
+    simulation_id: Optional[str] = None  #: Simulation ID.
 
 
-# Default Node Type in Graph
+# Default Node Type in Graph
 class GraphNode(BaseModel):
-    id: str
-    position: Position
-    data: graphNodeData
-    type: str
+    """A node in the graph.
+
+    Attributes:
+        id: Unique node ID.
+        position: Node position.
+        data: Extra metadata.
+        type: Renderer/type key.
+    """
+
+    id: str  #: Node ID.
+    position: Position  #: Node position.
+ data: graphNodeData #: Node data. + type: str #: Node type. # Edge Type in Graph class GraphEdge(BaseModel): - id: str - source: str - target: str - type: str + """An edge in the graph. + + Attributes: + id: Unique edge ID. + source: Source node ID. + target: Target node ID. + type: Renderer/type key. + """ + + id: str #: Edge ID. + source: str #: Source node ID. + target: str #: Target node ID. + type: str #: Edge type. class GraphData(BaseModel): - nodes: list[GraphNode] - edges: list[GraphEdge] + """A graph representation of a project in the `graph.json` file. + + Attributes: + nodes: List of nodes. + edges: List of edges. + """ + + nodes: list[GraphNode] #: List of nodes. + edges: list[GraphEdge] #: List of edges. diff --git a/fluidize/core/types/node.py b/fluidize/core/types/node.py index 9e49ce4..15a2d23 100644 --- a/fluidize/core/types/node.py +++ b/fluidize/core/types/node.py @@ -11,6 +11,7 @@ from pydantic import BaseModel, ConfigDict, computed_field from .file_models.metadata_model import MetadataModel +from .file_models.parameters_model import ParametersModel from .file_models.properties_model import PropertiesModel from .runs import RunStatus @@ -95,3 +96,15 @@ class nodeMetadata_simulation(MetadataModel): class Key: key = "simulation" metadata_version = "1.0" + + +class nodeParameters_simulation(ParametersModel): + """ + Parameters configuration for a simulation node. 
+ + Handles loading and saving of parameters.json files with the structure: + {"parameters": [list of Parameter objects]} + """ + + class Key: + key = "parameters" diff --git a/fluidize/core/utils/dataloader/json.py b/fluidize/core/utils/dataloader/json.py deleted file mode 100644 index 02dbf0d..0000000 --- a/fluidize/core/utils/dataloader/json.py +++ /dev/null @@ -1,13 +0,0 @@ -import json -from typing import Any - - -def read_json(path: str) -> dict[str, Any]: - with open(path) as f: - data = json.load(f) - return dict(data) - - -def write_local_json(path: str, data: dict[str, Any]) -> None: - with open(path, "w") as f: - json.dump(data, f, indent=2) diff --git a/fluidize/core/utils/logger/execution_logger.py b/fluidize/core/utils/logger/execution_logger.py new file mode 100644 index 0000000..3b3bfb6 --- /dev/null +++ b/fluidize/core/utils/logger/execution_logger.py @@ -0,0 +1,152 @@ +""" +Execution Logger + +Utility for saving execution logs (stdout/stderr) from various execution modes. +This module provides a centralized way to persist execution logs to files +using the PathFinder and DataWriter utilities. +""" + +import logging +from typing import Optional + +from fluidize.core.types.project import ProjectSummary +from fluidize.core.utils.dataloader.data_writer import DataWriter +from fluidize.core.utils.pathfinder.path_finder import PathFinder + +logger = logging.getLogger(__name__) + + +class ExecutionLogger: + """ + Centralized execution log management for all execution modes. + + This class provides static methods for saving execution logs + from Docker, VM, and other execution environments. + """ + + @classmethod + def save_execution_logs( + cls, + project: ProjectSummary, + run_metadata: Optional[object], + node_id: str, + stdout: str, + stderr: str, + ) -> bool: + """ + Save both stdout and stderr logs for a node execution. 
+ + Args: + project: Project information + run_metadata: Run metadata containing run_number + node_id: Node identifier + stdout: Standard output content + stderr: Standard error content + + Returns: + True if logs were saved successfully, False otherwise + """ + if not cls._validate_run_metadata(run_metadata): + logger.warning("No valid run metadata available, skipping log file saving") + return False + + try: + # Create nodes log directory + run_number = run_metadata.run_number # type: ignore[union-attr] + nodes_log_dir = PathFinder.get_logs_path(project, run_number) / "nodes" + DataWriter.create_directory(nodes_log_dir) + + # Save both stdout and stderr + stdout_saved = cls.save_stdout(project, run_metadata, node_id, stdout) + stderr_saved = cls.save_stderr(project, run_metadata, node_id, stderr) + + return stdout_saved or stderr_saved # Success if at least one was saved + + except Exception as e: + logger.warning(f"Failed to save execution logs for node {node_id}: {e}") + return False + + @classmethod + def save_stdout( + cls, + project: ProjectSummary, + run_metadata: Optional[object], + node_id: str, + stdout: str, + ) -> bool: + """ + Save stdout log for a node execution. 
+ + Args: + project: Project information + run_metadata: Run metadata containing run_number + node_id: Node identifier + stdout: Standard output content + + Returns: + True if stdout was saved successfully, False otherwise + """ + if not cls._validate_run_metadata(run_metadata) or not stdout: + return False + + try: + run_number = run_metadata.run_number # type: ignore[union-attr] + stdout_path = PathFinder.get_log_path(project, run_number, node_id, "stdout") + DataWriter.write_text(stdout_path, stdout) + logger.info(f"Saved stdout log to: {stdout_path}") + return True + + except Exception as e: + logger.warning(f"Failed to save stdout log for node {node_id}: {e}") + return False + + @classmethod + def save_stderr( + cls, + project: ProjectSummary, + run_metadata: Optional[object], + node_id: str, + stderr: str, + ) -> bool: + """ + Save stderr log for a node execution. + + Args: + project: Project information + run_metadata: Run metadata containing run_number + node_id: Node identifier + stderr: Standard error content + + Returns: + True if stderr was saved successfully, False otherwise + """ + if not cls._validate_run_metadata(run_metadata) or not stderr: + return False + + try: + run_number = run_metadata.run_number # type: ignore[union-attr] + stderr_path = PathFinder.get_log_path(project, run_number, node_id, "stderr") + DataWriter.write_text(stderr_path, stderr) + logger.info(f"Saved stderr log to: {stderr_path}") + return True + + except Exception as e: + logger.warning(f"Failed to save stderr log for node {node_id}: {e}") + return False + + @classmethod + def _validate_run_metadata(cls, run_metadata: Optional[object]) -> bool: + """ + Validate that run metadata has the required run_number attribute. 
+ + Args: + run_metadata: Run metadata object to validate + + Returns: + True if run_metadata is valid, False otherwise + """ + return ( + run_metadata is not None + and hasattr(run_metadata, "run_number") + and run_metadata.run_number is not None + ) diff --git a/fluidize/managers/__init__.py b/fluidize/managers/__init__.py index 58ff576..1215b5c 100644 --- a/fluidize/managers/__init__.py +++ b/fluidize/managers/__init__.py @@ -10,11 +10,11 @@ The managers module implements a two-tier pattern: 1. **Global Managers** - Handle cross-project operations - - `Projects`: Creates, retrieves, updates, and lists projects + - `RegistryManager`: Creates, retrieves, updates, and lists projects 2. **Project-Scoped Managers** - Bound to specific projects - - `ProjectGraph`: Manages nodes and edges within a project's computational graph - - `ProjectRuns`: Executes and monitors workflow runs for a project + - `GraphManager`: Manages nodes and edges within a project's computational graph + - `RunsManager`: Executes and monitors workflow runs for a project Design Pattern: The module uses a wrapper pattern where global managers return entity @@ -22,8 +22,8 @@ client.projects (Projects) └── .create() / .get() → Project entity - ├── .graph (ProjectGraph) - Computational graph operations - └── .runs (ProjectRuns) - Workflow execution operations + ├── .graph (GraphManager) - Computational graph operations + └── .runs (RunsManager) - Workflow execution operations Usage Examples: Basic project workflow:: @@ -76,8 +76,8 @@ File Structure: - `projects.py`: Global project CRUD operations (Projects class) - `project_manager.py`: Single project entity with sub-managers (Project class) - - `project_graph.py`: Project-scoped graph operations (ProjectGraph class) - - `project_runs.py`: Project-scoped run operations (ProjectRuns class) + - `graph.py`: Project-scoped graph operations (GraphManager class) + - `runs.py`: Project-scoped run operations (RunsManager class) Threading and adapter Support: 
All managers are thread-safe and support both local filesystem and @@ -85,8 +85,8 @@ of adapter is transparent to the manager classes. See Also: - - :class:`~fluidize.managers.projects.Projects`: Global project manager - - :class:`~fluidize.managers.project_manager.Project`: Project entity wrapper - - :class:`~fluidize.managers.project_graph.ProjectGraph`: Graph operations - - :class:`~fluidize.managers.project_runs.ProjectRuns`: Run operations + - :class:`~fluidize.managers.registry.RegistryManager`: Global project manager + - :class:`~fluidize.managers.project.ProjectManager`: Project entity wrapper + - :class:`~fluidize.managers.graph.GraphManager`: Graph operations + - :class:`~fluidize.managers.runs.RunsManager`: Run operations """ diff --git a/fluidize/managers/project_graph.py b/fluidize/managers/graph.py similarity index 85% rename from fluidize/managers/project_graph.py rename to fluidize/managers/graph.py index 18be2c7..3842d94 100644 --- a/fluidize/managers/project_graph.py +++ b/fluidize/managers/graph.py @@ -2,15 +2,18 @@ Project-scoped graph manager for user-friendly graph operations. """ -from typing import Any, Optional +from typing import TYPE_CHECKING, Any, Optional from fluidize.core.types.graph import GraphData, GraphEdge, GraphNode + +if TYPE_CHECKING: + from .node import NodeManager from fluidize.core.types.node import nodeMetadata_simulation, nodeProperties_simulation from fluidize.core.types.parameters import Parameter from fluidize.core.types.project import ProjectSummary -class ProjectGraph: +class GraphManager: """ Graph manager for a specific project. @@ -20,10 +23,8 @@ class ProjectGraph: def __init__(self, adapter: Any, project: ProjectSummary) -> None: """ - Initialize project-scoped graph manager. 
- Args: - adapter: adapter adapter (FluidizeSDK or Localadapter) + adapter: adapter (FluidizeSDK or LocalAdapter) project: The project this graph manager is bound to """ self.adapter = adapter @@ -42,7 +43,21 @@ def get(self) -> GraphData: """ return self.adapter.graph.get_graph(self.project) # type: ignore[no-any-return] - def add_node(self, node: GraphNode, sim_global: bool = True) -> GraphNode: + def get_node(self, node_id: str) -> "NodeManager": + """ + Get a NodeManager for a specific node in the project. + + Args: + node_id: ID of the node to get a manager for + + Returns: + NodeManager instance for the specified node + """ + from .node import NodeManager + + return NodeManager(self.adapter, self.project, node_id) + + def add_node(self, node: GraphNode, sim_global: bool = True) -> "NodeManager": """ Add a new node to this project's graph. @@ -51,9 +66,10 @@ def add_node(self, node: GraphNode, sim_global: bool = True) -> GraphNode: sim_global: Whether to use global simulations (placeholder for future) Returns: - The inserted node + The added node """ - return self.adapter.graph.insert_node(self.project, node, sim_global) # type: ignore[no-any-return] + inserted_node = self.adapter.graph.insert_node(self.project, node, sim_global) + return self.get_node(inserted_node.id) def add_node_from_scratch( self, @@ -61,7 +77,7 @@ def add_node_from_scratch( node_properties: nodeProperties_simulation, node_metadata: nodeMetadata_simulation, repo_link: Optional[str] = None, - ) -> GraphNode: + ) -> "NodeManager": """ Add a new node to this project's graph from scratch, creating all necessary files and directories. 
@@ -72,11 +88,12 @@
         repo_link: Optional repository URL to clone into the source directory
 
         Returns:
-            The inserted node
+            A NodeManager for the newly added node
         """
-        return self.adapter.graph.insert_node_from_scratch(  # type: ignore[no-any-return]
+        inserted_node = self.adapter.graph.insert_node_from_scratch(
             self.project, node, node_properties, node_metadata, repo_link
         )
+        return self.get_node(inserted_node.id)
 
     def update_node_position(self, node: GraphNode) -> GraphNode:
         """
diff --git a/fluidize/managers/node.py b/fluidize/managers/node.py
new file mode 100644
index 0000000..41619d5
--- /dev/null
+++ b/fluidize/managers/node.py
@@ -0,0 +1,444 @@
+"""
+Node-scoped manager for user-friendly node operations.
+"""
+
+from typing import Any, Optional
+
+from upath import UPath
+
+from fluidize.core.constants import FileConstants
+from fluidize.core.types.graph import GraphNode
+from fluidize.core.types.node import nodeMetadata_simulation, nodeParameters_simulation, nodeProperties_simulation
+from fluidize.core.types.parameters import Parameter
+from fluidize.core.types.project import ProjectSummary
+from fluidize.core.utils.pathfinder.path_finder import PathFinder
+
+
+class NodeManager:
+    """
+    Node manager for a specific node within a project.
+
+    Provides node-specific operations like editing parameters, metadata,
+    and properties without requiring project and node context on each method call.
+    """
+
+    def __init__(self, adapter: Any, project: ProjectSummary, node_id: str) -> None:
+        """
+        Args:
+            adapter: adapter (FluidizeSDK or LocalAdapter)
+            project: The project this node belongs to
+            node_id: The ID of the node this manager is bound to
+        """
+        self.adapter = adapter
+        self.project = project
+        self.node_id = node_id
+
+    @property
+    def id(self) -> str:
+        """
+        Get the node ID.
+
+        Returns:
+            The ID of the node this manager is bound to
+        """
+        return self.node_id
+
+    @property
+    def data(self) -> Any:
+        """
+        Get the node's data.
+ + Returns: + The data of the graph node + """ + return self.get_node().data + + def get_node(self) -> GraphNode: + """ + Get the complete graph node data. + + Returns: + GraphNode containing the node data + + Raises: + ValueError: If the node is not found in the project graph + """ + graph = self.adapter.graph.get_graph(self.project) + for node in graph.nodes: + if node.id == self.node_id: + return node # type: ignore[no-any-return] + msg = f"Node with ID '{self.node_id}' not found in project '{self.project.id}'" + raise ValueError(msg) + + def exists(self) -> bool: + """ + Check if this node exists in the project graph. + + Returns: + True if the node exists, False otherwise + """ + try: + self.get_node() + except ValueError: + return False + else: + return True + + def delete(self) -> None: + """ + Delete this node from the project graph and filesystem. + """ + self.adapter.graph.delete_node(self.project, self.node_id) + + def update_position(self, x: float, y: float) -> GraphNode: + """ + Update the node's position in the graph. + + Args: + x: New x coordinate + y: New y coordinate + + Returns: + The updated graph node + """ + node = self.get_node() + node.position.x = x + node.position.y = y + return self.adapter.graph.update_node_position(self.project, node) # type: ignore[no-any-return] + + def get_metadata(self) -> nodeMetadata_simulation: + """ + Get the node's metadata from metadata.yaml. + + Returns: + The node's metadata + + Raises: + FileNotFoundError: If metadata file doesn't exist + ValueError: If metadata file is invalid + """ + node_path = PathFinder.get_node_path(self.project, self.node_id) + return nodeMetadata_simulation.from_file(node_path) + + def update_metadata(self, **kwargs: Any) -> nodeMetadata_simulation: + """ + Update specific fields in the node's metadata. 
+ + Args: + **kwargs: Fields to update (e.g., name="New Name", description="New desc") + + Returns: + The updated metadata + + Raises: + AttributeError: If trying to update a field that doesn't exist + """ + metadata = self.get_metadata() + metadata.edit(**kwargs) + return metadata + + def save_metadata(self, metadata: nodeMetadata_simulation) -> None: + """ + Save metadata object to the node's metadata.yaml file. + + Args: + metadata: The metadata object to save + """ + node_path = PathFinder.get_node_path(self.project, self.node_id) + metadata.save(node_path) + + def get_properties(self) -> nodeProperties_simulation: + """ + Get the node's properties from properties.yaml. + + Returns: + The node's properties + + Raises: + FileNotFoundError: If properties file doesn't exist + ValueError: If properties file is invalid + """ + node_path = PathFinder.get_node_path(self.project, self.node_id) + return nodeProperties_simulation.from_file(node_path) + + def update_properties(self, **kwargs: Any) -> nodeProperties_simulation: + """ + Update specific fields in the node's properties. + + Args: + **kwargs: Fields to update (e.g., container_image="new:tag", should_run=False) + + Returns: + The updated properties + + Raises: + AttributeError: If trying to update a field that doesn't exist + """ + properties = self.get_properties() + properties.edit(**kwargs) + return properties + + def save_properties(self, properties: nodeProperties_simulation) -> None: + """ + Save properties object to the node's properties.yaml file. + + Args: + properties: The properties object to save + """ + node_path = PathFinder.get_node_path(self.project, self.node_id) + properties.save(node_path) + + def get_parameters_model(self) -> nodeParameters_simulation: + """ + Get the node's parameters model from parameters.json. 
+ + Returns: + The node's parameters model + + Raises: + FileNotFoundError: If parameters file doesn't exist + ValueError: If parameters file is invalid + """ + node_path = PathFinder.get_node_path(self.project, self.node_id) + return nodeParameters_simulation.from_file(node_path) + + def get_parameters(self) -> list[Parameter]: + """ + Get the node's parameters list from parameters.json. + + Returns: + List of Parameter objects for the node + """ + return self.get_parameters_model().parameters + + def get_parameter(self, name: str) -> Optional[Parameter]: + """ + Get a specific parameter by name. + + Args: + name: Name of the parameter to retrieve + + Returns: + The parameter if found, None otherwise + """ + parameters = self.get_parameters() + for param in parameters: + if param.name == name: + return param + return None + + def update_parameter(self, parameter: Parameter) -> Parameter: + """ + Update or add a parameter. + + Args: + parameter: The parameter to update/add + + Returns: + The updated parameter + """ + parameters_model = self.get_parameters_model() + + # Check if parameter with same name exists + for p in parameters_model.parameters: + if p.name == parameter.name: + # Update existing parameter + p.value = parameter.value + p.description = parameter.description + p.type = parameter.type + p.label = parameter.label + p.latex = parameter.latex + p.options = parameter.options + p.scope = parameter.scope + # Handle location extension + if parameter.location: + if p.location: + p.location.extend(parameter.location) + else: + p.location = parameter.location + break + else: + # Parameter doesn't exist, add it + parameters_model.parameters.append(parameter) + + parameters_model.save() + return parameter + + def set_parameters(self, parameters: list[Parameter]) -> list[Parameter]: + """ + Replace all parameters with the provided list. 
+ + Args: + parameters: List of parameters to set + + Returns: + The list of parameters that were set + """ + parameters_model = self.get_parameters_model() + parameters_model.parameters = parameters + parameters_model.save() + return parameters + + def remove_parameter(self, name: str) -> bool: + """ + Remove a parameter by name. + + Args: + name: Name of the parameter to remove + + Returns: + True if parameter was removed, False if it didn't exist + """ + parameters_model = self.get_parameters_model() + original_count = len(parameters_model.parameters) + parameters_model.parameters = [p for p in parameters_model.parameters if p.name != name] + + if len(parameters_model.parameters) < original_count: + parameters_model.save() + return True + return False + + def show_parameters(self) -> str: + """ + Get a formatted string display of all parameters. + + Returns: + A formatted string displaying the parameters + """ + parameters = self.get_parameters() + + if not parameters: + return f"No parameters found for node '{self.node_id}'" + + output = f"Parameters for node '{self.node_id}':\n\n" + + for i, param in enumerate(parameters, 1): + output += f"Parameter {i}:\n" + output += f" Name: {param.name}\n" + output += f" Value: {param.value}\n" + output += f" Description: {param.description}\n" + output += f" Type: {param.type}\n" + output += f" Label: {param.label}\n" + if param.latex: + output += f" LaTeX: {param.latex}\n" + if param.location: + output += f" Location: {param.location}\n" + if param.options: + output += f" Options: {[opt.label for opt in param.options]}\n" + if param.scope: + output += f" Scope: {param.scope}\n" + output += "\n" + + return output + + def get_node_directory(self) -> UPath: + """ + Get the filesystem path to this node's directory. 
+ + Returns: + Path to the node's directory + """ + return PathFinder.get_node_path(self.project, self.node_id) + + def get_metadata_path(self) -> UPath: + """ + Get the filesystem path to this node's metadata.yaml file. + + Returns: + Path to the metadata file + """ + return self.get_node_directory() / FileConstants.METADATA_SUFFIX + + def get_properties_path(self) -> UPath: + """ + Get the filesystem path to this node's properties.yaml file. + + Returns: + Path to the properties file + """ + return PathFinder.get_properties_path(self.project, self.node_id) + + def get_parameters_path(self) -> UPath: + """ + Get the filesystem path to this node's parameters.json file. + + Returns: + Path to the parameters file + """ + return PathFinder.get_node_parameters_path(self.project, self.node_id) + + def validate(self) -> dict[str, Any]: + """ + Validate the node's files and structure. + + Returns: + Dictionary containing validation results with keys: + - 'valid': bool indicating if node is valid + - 'graph_node_exists': bool + - 'metadata_exists': bool + - 'properties_exists': bool + - 'parameters_exists': bool + - 'errors': list of error messages + """ + result: dict[str, Any] = { + "valid": True, + "graph_node_exists": False, + "metadata_exists": False, + "properties_exists": False, + "parameters_exists": False, + "errors": [], + } + + # Check if node exists in graph + try: + self.get_node() + result["graph_node_exists"] = True + except ValueError as e: + result["errors"].append(str(e)) + + # Check metadata file + try: + self.get_metadata() + result["metadata_exists"] = True + except Exception as e: + result["errors"].append(f"Metadata error: {e}") + + # Check properties file + try: + self.get_properties() + result["properties_exists"] = True + except Exception as e: + result["errors"].append(f"Properties error: {e}") + + # Check parameters file + try: + self.get_parameters() + result["parameters_exists"] = True + except Exception as e: + 
result["errors"].append(f"Parameters error: {e}") + + result["valid"] = len(result["errors"]) == 0 + return result + + def to_dict(self) -> dict[str, Any]: + """ + Convert the complete node information to a dictionary. + + Returns: + Dictionary containing node graph data, metadata, properties, and parameters + """ + try: + return { + "graph_node": self.get_node().model_dump(), + "metadata": self.get_metadata().model_dump(), + "properties": self.get_properties().model_dump(), + "parameters": [p.model_dump() for p in self.get_parameters()], + "paths": { + "node_directory": str(self.get_node_directory()), + "metadata_file": str(self.get_metadata_path()), + "properties_file": str(self.get_properties_path()), + "parameters_file": str(self.get_parameters_path()), + }, + } + except Exception as e: + return {"error": str(e), "node_id": self.node_id, "project": self.project.id} diff --git a/fluidize/managers/project_manager.py b/fluidize/managers/project.py similarity index 64% rename from fluidize/managers/project_manager.py rename to fluidize/managers/project.py index 21ed9eb..ea27bc3 100644 --- a/fluidize/managers/project_manager.py +++ b/fluidize/managers/project.py @@ -6,44 +6,42 @@ from fluidize.core.types.project import ProjectSummary -from .project_graph import ProjectGraph -from .project_runs import ProjectRuns +from .graph import GraphManager +from .runs import RunsManager -class Project: +class ProjectManager: """ - Project entity that wraps project data and provides access to scoped managers. + Project manager that wraps project data and provides access to scoped managers. Provides convenient access to graph and runs operations for this specific project. """ def __init__(self, adapter: Any, project_summary: ProjectSummary) -> None: """ - Initialize project wrapper. 
-
         Args:
-            adapter: adapter adapter (FluidizeSDK or Localadapter)
+            adapter: adapter (FluidizeSDK or LocalAdapter)
             project_summary: The underlying project data
         """
         self._adapter = adapter
         self._project_summary = project_summary
-        self._graph: Optional[ProjectGraph] = None
-        self._runs: Optional[ProjectRuns] = None
+        self._graph: Optional[GraphManager] = None
+        self._runs: Optional[RunsManager] = None

     @property
-    def graph(self) -> ProjectGraph:
+    def graph(self) -> GraphManager:
         """
         Get the graph manager for this project.

         Returns:
-            ProjectGraph manager scoped to this project
+            GraphManager scoped to this project
         """
         if self._graph is None:
-            self._graph = ProjectGraph(self._adapter, self._project_summary)
+            self._graph = GraphManager(self._adapter, self._project_summary)
         return self._graph

     @property
-    def runs(self) -> ProjectRuns:
+    def runs(self) -> RunsManager:
         """
         Get the runs manager for this project.

@@ -51,48 +49,80 @@ def runs(self) -> ProjectRuns:
         Returns:
-            ProjectRuns manager scoped to this project
+            RunsManager scoped to this project
         """
         if self._runs is None:
-            self._runs = ProjectRuns(self._adapter, self._project_summary)
+            self._runs = RunsManager(self._adapter, self._project_summary)
         return self._runs

     # Delegate all ProjectSummary attributes
     @property
     def id(self) -> str:
-        """Get project ID."""
+        """Get project ID.
+
+        Returns:
+            The project ID
+        """
         return self._project_summary.id

     @property
     def label(self) -> Optional[str]:
-        """Get project label."""
+        """Get project label.
+
+        Returns:
+            The project label
+        """
         return self._project_summary.label

     @property
     def description(self) -> Optional[str]:
-        """Get project description."""
+        """Get project description.
+
+        Returns:
+            The project description
+        """
         return self._project_summary.description

     @property
     def location(self) -> Optional[str]:
-        """Get project location."""
+        """Get project location.
+ + Returns: + The project location + """ return self._project_summary.location @property def status(self) -> Optional[str]: - """Get project status.""" + """Get project status. + + Returns: + The project status + """ return self._project_summary.status @property def metadata_version(self) -> str: - """Get project metadata version.""" + """Get project metadata version. + + Returns: + The project metadata version + """ return self._project_summary.metadata_version @property def created_at(self) -> Optional[str]: - """Get project creation timestamp.""" + """Get project creation timestamp. + + Returns: + The project creation timestamp + """ return getattr(self._project_summary, "created_at", None) @property def updated_at(self) -> Optional[str]: - """Get project update timestamp.""" + """Get project update timestamp. + + Returns: + The project update timestamp + """ return getattr(self._project_summary, "updated_at", None) def to_dict(self) -> dict[str, Any]: @@ -113,8 +143,5 @@ def to_dict(self) -> dict[str, Any]: "updated_at": self.updated_at, } - def __repr__(self) -> str: - return f"Project(id='{self.id}', label='{self.label}')" - def __str__(self) -> str: return f"Project {self.id}: {self.label or 'No label'}" diff --git a/fluidize/managers/projects.py b/fluidize/managers/registry.py similarity index 81% rename from fluidize/managers/projects.py rename to fluidize/managers/registry.py index db50238..c4b6094 100644 --- a/fluidize/managers/projects.py +++ b/fluidize/managers/registry.py @@ -1,11 +1,11 @@ from typing import Any, Optional -from .project_manager import Project +from .project import ProjectManager -class Projects: +class RegistryManager: """ - Manager for project CRUD operations. + Registry manager for project CRUD operations. Provides methods to create, retrieve, update, and list projects. All methods return Project entities that give access to project-scoped operations. 
@@ -13,10 +13,8 @@ class Projects:

     def __init__(self, adapter: Any) -> None:
         """
-        Initialize the Projects manager.
-
         Args:
-            adapter: adapter adapter (FluidizeSDK or Localadapter)
+            adapter: adapter (FluidizeSDK or LocalAdapter)
         """
         self.adapter = adapter

@@ -27,7 +25,7 @@ def create(
         description: str = "",
         location: str = "",
         status: str = "",
-    ) -> Project:
+    ) -> ProjectManager:
         """
         Create a new project.

@@ -48,9 +46,9 @@ def create(
             location=location,
             status=status,
         )
-        return Project(self.adapter, project_summary)
+        return ProjectManager(self.adapter, project_summary)

-    def get(self, project_id: str) -> Project:
+    def get(self, project_id: str) -> ProjectManager:
         """
         Get a project by ID.

@@ -61,9 +59,9 @@
-            Project wrapped in Project class
+            The project wrapped in a ProjectManager
         """
         project_summary = self.adapter.projects.retrieve(project_id)
-        return Project(self.adapter, project_summary)
+        return ProjectManager(self.adapter, project_summary)

-    def list(self) -> list[Project]:
+    def list(self) -> list[ProjectManager]:
         """
         List all projects.

@@ -71,7 +69,7 @@
-            List of projects wrapped in Project class
+            List of projects wrapped in ProjectManager instances
         """
         project_summaries = self.adapter.projects.list()
-        return [Project(self.adapter, summary) for summary in project_summaries]
+        return [ProjectManager(self.adapter, summary) for summary in project_summaries]

     def update(
         self,
@@ -80,7 +78,7 @@ def update(
         description: Optional[str] = None,
         location: Optional[str] = None,
         status: Optional[str] = None,
-    ) -> Project:
+    ) -> ProjectManager:
         """
         Update an existing project.
@@ -106,4 +104,4 @@ def update(
         update_data["status"] = status

         project_summary = self.adapter.projects.upsert(**update_data)
-        return Project(self.adapter, project_summary)
+        return ProjectManager(self.adapter, project_summary)
diff --git a/fluidize/managers/project_runs.py b/fluidize/managers/runs.py
similarity index 95%
rename from fluidize/managers/project_runs.py
rename to fluidize/managers/runs.py
index 703da6c..4427f51 100644
--- a/fluidize/managers/project_runs.py
+++ b/fluidize/managers/runs.py
@@ -10,7 +10,7 @@

 from fluidize.core.types.runs import RunFlowPayload


-class ProjectRuns:
+class RunsManager:
     """
     Runs manager for a specific project.

@@ -20,10 +20,8 @@ class ProjectRuns:

     def __init__(self, adapter: Any, project: ProjectSummary) -> None:
         """
-        Initialize project-scoped runs manager.
-
         Args:
-            adapter: adapter adapter (FluidizeSDK or Localadapter)
+            adapter: adapter (FluidizeSDK or LocalAdapter)
             project: The project this runs manager is bound to
         """
         self.adapter = adapter
diff --git a/fluidize/managers/simulations.py b/fluidize/managers/simulations.py
new file mode 100644
index 0000000..e6f3ed5
--- /dev/null
+++ b/fluidize/managers/simulations.py
@@ -0,0 +1,33 @@
+from typing import Any
+
+from fluidize_sdk import FluidizeSDK
+
+from fluidize.core.types.node import nodeMetadata_simulation
+
+
+class SimulationsManager:
+    """
+    Simulations manager that provides access to the Fluidize simulation library.
+    """
+
+    def __init__(self, adapter: Any) -> None:
+        """
+        Args:
+            adapter: adapter (FluidizeSDK or LocalAdapter)
+        """
+        self._adapter = adapter
+        # TODO: stop hardcoding api_token and remove the noqa suppression
+        self.fluidize_sdk = FluidizeSDK(api_token="placeholder")  # noqa: S106
+
+    def list_simulations(self) -> list[Any]:
+        """
+        List all simulations available in the Fluidize simulation library.
+ + Returns: + List of simulation metadata + """ + simulations = self.fluidize_sdk.simulation.list_simulations(sim_global=True) + return [ + nodeMetadata_simulation.from_dict_and_path(data=simulation.model_dump(), path=None) + for simulation in simulations + ] diff --git a/mkdocs.yml b/mkdocs.yml index 13624b1..b561f73 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -10,14 +10,15 @@ copyright: Maintained by Fluidize-Inc. nav: - Home: index.md - Getting Started: - - Fluidize Client: getting-started/client.md - - Projects and Nodes: getting-started/projects-nodes.md + - Quickstart: getting-started/quickstart.md - Examples: getting-started/examples.md - Core Modules: + - core-modules/index.md + - Client: core-modules/client.md - Projects: core-modules/projects.md - Graph: core-modules/graph.md + - Node: core-modules/node.md - Run: core-modules/run.md - - Execute: core-modules/execute.md plugins: - search - mkdocstrings: @@ -25,15 +26,23 @@ plugins: python: paths: ["fluidize"] options: + merge_init_into_class: true + show_signature: true + show_signature_annotations: true + members_order: source + docstring_style: google filters: - - "!^__init__$" + - "!^_" show_source: false - show_root_heading: true show_root_full_path: false + extra: + show_attributes: true + show_root_heading: true theme: name: material - feature: - tabs: true + features: + - tabs + - navigation.indexes palette: - media: "(prefers-color-scheme: light)" scheme: default diff --git a/pyproject.toml b/pyproject.toml index f6b43ce..f0844da 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -20,7 +20,7 @@ classifiers = [ dependencies = [ "asciitree>=0.3.3", "docker>=7.1.0", - "fluidize-sdk>=0.4.0", + "fluidize-sdk>=0.6.0", "jinja2>=3.1.6", "mlflow>=3.1.4", "networkx>=3.2.1", diff --git a/tests/integration/conftest.py b/tests/integration/conftest.py index b1a079b..f77b1d5 100644 --- a/tests/integration/conftest.py +++ b/tests/integration/conftest.py @@ -10,7 +10,7 @@ from fluidize.adapters.local.adapter 
import LocalAdapter
 from fluidize.client import FluidizeClient
 from fluidize.config import FluidizeConfig
-from fluidize.managers.projects import Projects
+from fluidize.managers.registry import RegistryManager


 @pytest.fixture
@@ -79,9 +79,9 @@ def client() -> FluidizeClient:


 @pytest.fixture
-def projects_manager(local_adapter: LocalAdapter) -> Projects:
-    """Create a Projects manager for integration testing."""
-    return Projects(local_adapter)
+def projects_manager(local_adapter: LocalAdapter) -> RegistryManager:
+    """Create a RegistryManager for integration testing."""
+    return RegistryManager(local_adapter)


 @pytest.fixture
diff --git a/tests/integration/test_graph_workflow.py b/tests/integration/test_graph_workflow.py
index 19456e5..1ff63ff 100644
--- a/tests/integration/test_graph_workflow.py
+++ b/tests/integration/test_graph_workflow.py
@@ -7,7 +7,7 @@

 from fluidize import FluidizeClient
 from fluidize.core.types.graph import GraphNode, Position, graphNodeData
-from fluidize.managers.project_manager import Project
+from fluidize.managers.project import ProjectManager

 from tests.fixtures.sample_graphs import SampleGraphs


@@ -61,7 +61,7 @@ def test_complete_project_graph_workflow(self, client):
             description="Testing complete graph workflow",
         )

-        assert isinstance(project, Project)
+        assert isinstance(project, ProjectManager)
         assert project.id == "integration-test-project"
         assert project.label == "Integration Test Project"

@@ -356,7 +356,7 @@ def test_project_list_with_graphs(self, client):

-        # Each project should be a Project wrapper with graph access
+        # Each project should be a ProjectManager wrapper with graph access
         for project in projects_list:
-            assert isinstance(project, Project)
+            assert isinstance(project, ProjectManager)
             assert hasattr(project, "graph")

         # Each project's graph should have one node
diff --git a/tests/integration/test_simulations_manager.py b/tests/integration/test_simulations_manager.py
new file mode 100644
index 0000000..43ef738
--- /dev/null
+++ b/tests/integration/test_simulations_manager.py
@@ -0,0 +1,40 @@
+"""Integration tests for SimulationsManager - tests real
API connectivity.""" + +import pytest + +from fluidize.managers.simulations import SimulationsManager + + +class TestSimulationsManagerIntegration: + """Integration test suite for SimulationsManager class.""" + + @pytest.fixture + def mock_adapter(self): + """Create a mock adapter for testing.""" + from unittest.mock import Mock + + adapter = Mock() + return adapter + + def test_list_simulations_integration(self, mock_adapter): + """Integration test that actually calls the API and prints output.""" + + # Create manager without mocking SDK + manager = SimulationsManager(mock_adapter) + + # Act - make real API call + result = manager.list_simulations() + + # Assert basic functionality + assert isinstance(result, list) + + # Print results for manual verification + print("\n=== Integration Test Results ===") + print(f"Number of simulations found: {len(result)}") + for sim in result: + print("Simulation details:") + print(f" Name: {sim.name}") + print(f" ID: {sim.id}") + print(f" Description: {sim.description}") + print(f" Version: {sim.version}") + print("\n") diff --git a/tests/unit/backends/local/test_graph.py b/tests/unit/backends/local/test_graph.py index dfb197b..a61bbab 100644 --- a/tests/unit/backends/local/test_graph.py +++ b/tests/unit/backends/local/test_graph.py @@ -310,31 +310,33 @@ def test_individual_operations(self, sample_project, operation, method_name, arg else: processor_method.assert_called_once() - @patch("fluidize.adapters.local.graph.DataLoader") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_get_parameters_success(self, mock_pathfinder, mock_dataloader, sample_project): + def test_get_parameters_success(self, mock_pathfinder, mock_node_params, sample_project): """Test successful parameter retrieval.""" # Mock setup mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_dataloader.load_json.return_value = 
{ - "parameters": [ - { - "name": "test_param", - "value": "test_value", - "type": "text", - "label": "Test Parameter", - "description": "A test parameter", - } - ] - } + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # Mock the parameters model instance + mock_params_instance = Mock() + mock_params_instance.parameters = [ + Parameter( + name="test_param", + value="test_value", + type="text", + label="Test Parameter", + description="A test parameter", + ) + ] + mock_node_params.from_file.return_value = mock_params_instance handler = GraphHandler() result = handler.get_parameters(sample_project, "test-node-id") # Verify calls - mock_pathfinder.get_node_parameters_path.assert_called_once_with(sample_project, "test-node-id") - mock_dataloader.load_json.assert_called_once_with(mock_parameters_path) + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "test-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) # Verify result assert len(result) == 1 @@ -342,16 +344,19 @@ def test_get_parameters_success(self, mock_pathfinder, mock_dataloader, sample_p assert result[0].name == "test_param" assert result[0].value == "test_value" - @patch("fluidize.adapters.local.graph.DataWriter") - @patch("fluidize.adapters.local.graph.DataLoader") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_upsert_parameter_new_parameter(self, mock_pathfinder, mock_dataloader, mock_datawriter, sample_project): + def test_upsert_parameter_new_parameter(self, mock_pathfinder, mock_node_params, sample_project): """Test upserting a new parameter.""" # Mock setup mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_dataloader.load_json.return_value = {"parameters": []} - mock_datawriter.write_json.return_value = True + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # 
Mock the parameters model instance with empty parameters + mock_params_instance = Mock() + mock_params_instance.parameters = [] + mock_node_params.from_file.return_value = mock_params_instance + mock_params_instance.save.return_value = None new_parameter = Parameter( name="new_param", value="new_value", type="text", label="New Parameter", description="A new parameter" @@ -361,41 +366,38 @@ def test_upsert_parameter_new_parameter(self, mock_pathfinder, mock_dataloader, result = handler.upsert_parameter(sample_project, "test-node-id", new_parameter) # Verify calls - mock_pathfinder.get_node_parameters_path.assert_called_once_with(sample_project, "test-node-id") - mock_dataloader.load_json.assert_called_once_with(mock_parameters_path) - mock_datawriter.write_json.assert_called_once() + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "test-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) + mock_params_instance.save.assert_called_once() - # Verify the written data contains the new parameter - written_data = mock_datawriter.write_json.call_args[1]["data"] - assert len(written_data["parameters"]) == 1 - assert written_data["parameters"][0]["name"] == "new_param" + # Verify the parameter was added to the instance + assert len(mock_params_instance.parameters) == 1 + assert mock_params_instance.parameters[0].name == "new_param" # Verify result assert result == new_parameter - @patch("fluidize.adapters.local.graph.DataWriter") - @patch("fluidize.adapters.local.graph.DataLoader") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_upsert_parameter_existing_parameter( - self, mock_pathfinder, mock_dataloader, mock_datawriter, sample_project - ): + def test_upsert_parameter_existing_parameter(self, mock_pathfinder, mock_node_params, sample_project): """Test upserting an existing parameter extends locations.""" # Mock setup with existing 
parameter mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_dataloader.load_json.return_value = { - "parameters": [ - { - "name": "existing_param", - "value": "existing_value", - "type": "text", - "label": "Existing Parameter", - "description": "An existing parameter", - "location": ["file1.py"], - } - ] - } - mock_datawriter.write_json.return_value = True + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # Mock the parameters model instance with existing parameter + existing_param = Parameter( + name="existing_param", + value="existing_value", + type="text", + label="Existing Parameter", + description="An existing parameter", + location=["file1.py"], + ) + mock_params_instance = Mock() + mock_params_instance.parameters = [existing_param] + mock_node_params.from_file.return_value = mock_params_instance + mock_params_instance.save.return_value = None update_parameter = Parameter( name="existing_param", @@ -409,24 +411,32 @@ def test_upsert_parameter_existing_parameter( handler = GraphHandler() result = handler.upsert_parameter(sample_project, "test-node-id", update_parameter) - # Verify the written data extends the location - written_data = mock_datawriter.write_json.call_args[1]["data"] - assert len(written_data["parameters"]) == 1 - param_data = written_data["parameters"][0] - assert param_data["name"] == "existing_param" - assert param_data["location"] == ["file1.py", "file2.py"] + # Verify calls + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "test-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) + mock_params_instance.save.assert_called_once() + + # Verify the parameter location was extended + updated_param = mock_params_instance.parameters[0] + assert updated_param.name == "existing_param" + assert updated_param.location == ["file1.py", "file2.py"] # Verify result assert result == update_parameter - 
@patch("fluidize.adapters.local.graph.DataWriter") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_set_parameters_success(self, mock_pathfinder, mock_datawriter, sample_project): + def test_set_parameters_success(self, mock_pathfinder, mock_node_params, sample_project): """Test setting parameters replaces all existing parameters.""" # Mock setup mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_datawriter.write_json.return_value = True + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # Mock the parameters model instance + mock_params_instance = Mock() + mock_params_instance.parameters = [] + mock_node_params.from_file.return_value = mock_params_instance + mock_params_instance.save.return_value = None parameters = [ Parameter(name="param1", value="value1", type="text", label="Parameter 1", description="First parameter"), @@ -439,42 +449,49 @@ def test_set_parameters_success(self, mock_pathfinder, mock_datawriter, sample_p result = handler.set_parameters(sample_project, "test-node-id", parameters) # Verify calls - mock_pathfinder.get_node_parameters_path.assert_called_once_with(sample_project, "test-node-id") - mock_datawriter.write_json.assert_called_once() + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "test-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) + mock_params_instance.save.assert_called_once() - # Verify the written data - written_data = mock_datawriter.write_json.call_args[1]["data"] - assert len(written_data["parameters"]) == 2 - assert written_data["parameters"][0]["name"] == "param1" - assert written_data["parameters"][1]["name"] == "param2" + # Verify the parameters were set correctly + assert mock_params_instance.parameters == parameters + assert len(mock_params_instance.parameters) == 2 + assert 
mock_params_instance.parameters[0].name == "param1" + assert mock_params_instance.parameters[1].name == "param2" # Verify result assert result == parameters - @patch("fluidize.adapters.local.graph.DataLoader") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_show_parameters_success(self, mock_pathfinder, mock_dataloader, sample_project): + def test_show_parameters_success(self, mock_pathfinder, mock_node_params, sample_project): """Test showing parameters in nice format.""" # Mock setup mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_dataloader.load_json.return_value = { - "parameters": [ - { - "name": "motor_strength", - "value": "20.0", - "type": "text", - "label": "Motor Strength", - "description": "Control signal strength for bat motor", - "scope": "simulation", - "location": ["source/pinata_simulation.py"], - } - ] - } + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # Mock the parameters model instance with a parameter + mock_params_instance = Mock() + mock_params_instance.parameters = [ + Parameter( + name="motor_strength", + value="20.0", + type="text", + label="Motor Strength", + description="Control signal strength for bat motor", + scope="simulation", + location=["source/pinata_simulation.py"], + ) + ] + mock_node_params.from_file.return_value = mock_params_instance handler = GraphHandler() result = handler.show_parameters(sample_project, "test-node-id") + # Verify calls + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "test-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) + # Verify the formatted output contains expected content assert "Parameters for node 'test-node-id':" in result assert "Name: motor_strength" in result @@ -485,17 +502,25 @@ def test_show_parameters_success(self, mock_pathfinder, mock_dataloader, sample_ 
assert "Scope: simulation" in result assert "Location: source/pinata_simulation.py" in result - @patch("fluidize.adapters.local.graph.DataLoader") + @patch("fluidize.adapters.local.graph.nodeParameters_simulation") @patch("fluidize.adapters.local.graph.PathFinder") - def test_show_parameters_no_parameters(self, mock_pathfinder, mock_dataloader, sample_project): + def test_show_parameters_no_parameters(self, mock_pathfinder, mock_node_params, sample_project): """Test showing parameters when none exist.""" # Mock setup for empty parameters mock_parameters_path = Mock() - mock_pathfinder.get_node_parameters_path.return_value = mock_parameters_path - mock_dataloader.load_json.return_value = {"parameters": []} + mock_pathfinder.get_node_path.return_value = mock_parameters_path + + # Mock the parameters model instance with empty parameters + mock_params_instance = Mock() + mock_params_instance.parameters = [] + mock_node_params.from_file.return_value = mock_params_instance handler = GraphHandler() result = handler.show_parameters(sample_project, "empty-node-id") + # Verify calls + mock_pathfinder.get_node_path.assert_called_once_with(sample_project, "empty-node-id") + mock_node_params.from_file.assert_called_once_with(mock_parameters_path) + # Verify the no parameters message assert result == "No parameters found for node 'empty-node-id'" diff --git a/tests/unit/core/modules/execute/test_execution_manager_simple.py b/tests/unit/core/modules/execute/test_execution_manager_simple.py index 6e1972b..d8cf45b 100644 --- a/tests/unit/core/modules/execute/test_execution_manager_simple.py +++ b/tests/unit/core/modules/execute/test_execution_manager_simple.py @@ -110,22 +110,27 @@ def test_execute_with_mode_kubernetes_not_implemented( assert result["success"] is False assert "not yet implemented" in result["error"].lower() - def test_execute_with_mode_direct(self, execution_manager): + def test_execute_with_mode_direct(self, execution_manager, sample_project, sample_node): """Test 
_execute_with_mode method directly.""" # Test Kubernetes mode - result = execution_manager._execute_with_mode(ExecutionMode.KUBERNETES, Mock()) + result = execution_manager._execute_with_mode( + ExecutionMode.KUBERNETES, Mock(), sample_project, sample_node, None + ) assert result["success"] is False assert "not yet implemented" in result["error"].lower() # Test unsupported mode (if any) mock_unsupported = Mock() mock_unsupported.value = "UNSUPPORTED_MODE" - result = execution_manager._execute_with_mode(mock_unsupported, Mock()) + result = execution_manager._execute_with_mode(mock_unsupported, Mock(), sample_project, sample_node, None) assert result["success"] is False assert "unsupported" in result["error"].lower() @patch("fluidize.core.modules.execute.execution_manager.DockerExecutionClient") - def test_execute_docker_client_creation(self, mock_docker_client_class, execution_manager): + @patch("fluidize.core.modules.execute.execution_manager.ExecutionLogger") + def test_execute_docker_client_creation( + self, mock_execution_logger, mock_docker_client_class, execution_manager, sample_project, sample_node + ): """Test Docker client creation in _execute_docker.""" mock_client = Mock() mock_client.pull_image.return_value = True @@ -137,25 +142,29 @@ def test_execute_docker_client_creation(self, mock_docker_client_class, executio mock_result.container_id = "container123" mock_client.run_container.return_value = mock_result mock_docker_client_class.return_value = mock_client + mock_execution_logger.save_execution_logs.return_value = True mock_spec = Mock() mock_spec.container_spec.image = "test:latest" mock_spec.volume_spec.volumes = [] - result = execution_manager._execute_docker(mock_spec) + result = execution_manager._execute_docker(mock_spec, sample_project, sample_node, None) assert execution_manager.docker_client is not None assert result["success"] is True assert result["execution_mode"] == "local_docker" + 
mock_execution_logger.save_execution_logs.assert_called_once() # Test reuse of existing client - result2 = execution_manager._execute_docker(mock_spec) + result2 = execution_manager._execute_docker(mock_spec, sample_project, sample_node, None) assert result2["success"] is True # Client should only be created once assert mock_docker_client_class.call_count == 1 @patch("fluidize.core.modules.execute.execution_manager.DockerExecutionClient") - def test_execute_docker_pull_failure(self, mock_docker_client_class, execution_manager): + def test_execute_docker_pull_failure( + self, mock_docker_client_class, execution_manager, sample_project, sample_node + ): """Test Docker execution with image pull failure.""" mock_client = Mock() mock_client.pull_image.return_value = False # Pull fails @@ -164,13 +173,16 @@ def test_execute_docker_pull_failure(self, mock_docker_client_class, execution_m mock_spec = Mock() mock_spec.container_spec.image = "nonexistent:latest" - result = execution_manager._execute_docker(mock_spec) + result = execution_manager._execute_docker(mock_spec, sample_project, sample_node, None) assert result["success"] is False assert "failed to pull image" in result["error"].lower() @patch("fluidize.core.modules.execute.execution_manager.VMExecutionClient") - def test_execute_vm_client_creation(self, mock_vm_client_class, execution_manager): + @patch("fluidize.core.modules.execute.execution_manager.ExecutionLogger") + def test_execute_vm_client_creation( + self, mock_execution_logger, mock_vm_client_class, execution_manager, sample_project, sample_node + ): """Test VM client creation in _execute_vm.""" mock_client = Mock() mock_result = Mock() @@ -181,20 +193,24 @@ def test_execute_vm_client_creation(self, mock_vm_client_class, execution_manage mock_result.command = "docker run test" mock_client.run_container.return_value = mock_result mock_vm_client_class.return_value = mock_client + mock_execution_logger.save_execution_logs.return_value = True mock_spec = 
Mock() mock_spec.container_spec = Mock() mock_spec.volume_spec.volumes = [] - result = execution_manager._execute_vm(mock_spec) + result = execution_manager._execute_vm(mock_spec, sample_project, sample_node, None) assert execution_manager.vm_client is not None assert result["success"] is True assert result["execution_mode"] == "vm_docker" assert "command" in result + mock_execution_logger.save_execution_logs.assert_called_once() @patch("fluidize.core.modules.execute.execution_manager.DockerExecutionClient") - def test_execute_docker_exception_handling(self, mock_docker_client_class, execution_manager): + def test_execute_docker_exception_handling( + self, mock_docker_client_class, execution_manager, sample_project, sample_node + ): """Test exception handling in _execute_docker.""" mock_client = Mock() mock_client.pull_image.side_effect = Exception("Docker daemon not running") @@ -203,14 +219,14 @@ def test_execute_docker_exception_handling(self, mock_docker_client_class, execu mock_spec = Mock() mock_spec.container_spec.image = "test:latest" - result = execution_manager._execute_docker(mock_spec) + result = execution_manager._execute_docker(mock_spec, sample_project, sample_node, None) assert result["success"] is False assert "docker daemon not running" in result["error"].lower() assert result["execution_mode"] == "local_docker" @patch("fluidize.core.modules.execute.execution_manager.VMExecutionClient") - def test_execute_vm_exception_handling(self, mock_vm_client_class, execution_manager): + def test_execute_vm_exception_handling(self, mock_vm_client_class, execution_manager, sample_project, sample_node): """Test exception handling in _execute_vm.""" mock_client = Mock() mock_client.run_container.side_effect = Exception("SSH connection failed") @@ -218,7 +234,7 @@ def test_execute_vm_exception_handling(self, mock_vm_client_class, execution_man mock_spec = Mock() - result = execution_manager._execute_vm(mock_spec) + result = 
execution_manager._execute_vm(mock_spec, sample_project, sample_node, None) assert result["success"] is False assert "ssh connection failed" in result["error"].lower() diff --git a/tests/unit/core/types/file_models/__init__.py b/tests/unit/core/types/file_models/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/unit/core/types/file_models/test_json_file_model_base.py b/tests/unit/core/types/file_models/test_json_file_model_base.py new file mode 100644 index 0000000..f3609a8 --- /dev/null +++ b/tests/unit/core/types/file_models/test_json_file_model_base.py @@ -0,0 +1,300 @@ +"""Unit tests for JSONFileModelBase class.""" + +import json +import tempfile +from typing import ClassVar +from unittest.mock import patch + +import pytest +from pydantic import ValidationError +from upath import UPath + +from fluidize.core.types.file_models.json_file_model_base import JSONFileModelBase + + +class MockJSONFileModel(JSONFileModelBase): + """Test implementation of JSONFileModelBase.""" + + _filename: ClassVar[str] = "test.json" + test_field: str = "default_value" + + # Configure to forbid extra fields for validation tests + model_config: ClassVar = {"extra": "forbid"} + + class Key: + key = "test_key" + + +class MockJSONFileModelNoKey(JSONFileModelBase): + """Test implementation without Key configuration.""" + + _filename: ClassVar[str] = "test_no_key.json" + test_field: str = "default_value" + + +class MockJSONFileModelNoFilename(JSONFileModelBase): + """Test implementation without _filename.""" + + test_field: str = "default_value" + + +class TestJSONFileModelBase: + """Test suite for JSONFileModelBase class.""" + + def test_filepath_property_with_path(self): + """Test filepath property when path is set.""" + model = MockJSONFileModel(test_field="test") + test_path = UPath("/test/path/test.json") + model._filepath = test_path + + assert model.filepath == test_path + + def test_filepath_property_without_path(self): + """Test filepath property when path 
is not set.""" + model = MockJSONFileModel(test_field="test") + + with pytest.raises(ValueError): + _ = model.filepath + + def test_directory_property(self): + """Test directory property returns parent of filepath.""" + model = MockJSONFileModel(test_field="test") + test_path = UPath("/test/path/test.json") + model._filepath = test_path + + assert model.directory == test_path.parent + + def test_directory_property_without_path(self): + """Test directory property when filepath is not set.""" + model = MockJSONFileModel(test_field="test") + + with pytest.raises(ValueError): + _ = model.directory + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_success(self, mock_data_loader): + """Test successful file loading.""" + mock_data_loader.load_json.return_value = {"test_field": "loaded_value"} + directory = UPath("/test/directory") + + result = MockJSONFileModel.from_file(directory) + + assert result.test_field == "loaded_value" + assert result._filepath == directory / "test.json" + mock_data_loader.load_json.assert_called_once_with(directory / "test.json") + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_no_filename(self, mock_data_loader): + """Test from_file with class that has no _filename.""" + directory = UPath("/test/directory") + + with pytest.raises(TypeError): + MockJSONFileModelNoFilename.from_file(directory) + + mock_data_loader.load_json.assert_not_called() + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_empty_data(self, mock_data_loader): + """Test from_file with empty data.""" + mock_data_loader.load_json.return_value = None + directory = UPath("/test/directory") + + with pytest.raises(FileNotFoundError): + MockJSONFileModel.from_file(directory) + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_validation_error(self, mock_data_loader): + """Test from_file with validation error.""" + 
mock_data_loader.load_json.return_value = {"invalid_field": "value"} + directory = UPath("/test/directory") + + with pytest.raises(ValidationError): + MockJSONFileModel.from_file(directory) + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_other_exception(self, mock_data_loader): + """Test from_file with other exception during model validation.""" + mock_data_loader.load_json.return_value = {"test_field": "value"} + directory = UPath("/test/directory") + + with ( + patch.object(MockJSONFileModel, "model_validate", side_effect=RuntimeError("Test error")), + pytest.raises(ValueError), + ): + MockJSONFileModel.from_file(directory) + + def test_from_dict_and_path_success(self): + """Test successful creation from dict and path.""" + data = {"test_field": "dict_value"} + path = UPath("/test/path/test.json") + + result = MockJSONFileModel.from_dict_and_path(data, path) + + assert result.test_field == "dict_value" + assert result._filepath == path + + def test_from_dict_and_path_empty_data(self): + """Test from_dict_and_path with empty data.""" + data = {} + path = UPath("/test/path/test.json") + + with pytest.raises(ValueError): + MockJSONFileModel.from_dict_and_path(data, path) + + def test_from_dict_and_path_validation_error(self): + """Test from_dict_and_path with validation error.""" + data = {"invalid_field": "value"} + path = UPath("/test/path/test.json") + + with pytest.raises(ValidationError): + MockJSONFileModel.from_dict_and_path(data, path) + + def test_from_dict_and_path_other_exception(self): + """Test from_dict_and_path with other exception during validation.""" + data = {"test_field": "value"} + path = UPath("/test/path/test.json") + + with ( + patch.object(MockJSONFileModel, "model_validate", side_effect=RuntimeError("Test error")), + pytest.raises(ValueError), + ): + MockJSONFileModel.from_dict_and_path(data, path) + + def test_model_dump_wrapped_with_key(self): + """Test model_dump_wrapped with Key configuration.""" 
+ model = MockJSONFileModel(test_field="test_value") + + result = model.model_dump_wrapped() + + expected = {"test_key": {"test_field": "test_value"}} + assert result == expected + + def test_model_dump_wrapped_without_key(self): + """Test model_dump_wrapped without Key configuration.""" + model = MockJSONFileModelNoKey(test_field="test_value") + + result = model.model_dump_wrapped() + + expected = {"test_field": "test_value"} + assert result == expected + + def test_model_dump_wrapped_no_key_attribute(self): + """Test model_dump_wrapped when Key class has no key attribute.""" + + # Create a model class with a Key that has no key attribute + class MockModelNoKeyAttr(JSONFileModelBase): + _filename: ClassVar[str] = "test.json" + test_field: str = "default_value" + + class Key: + pass # No key attribute + + model = MockModelNoKeyAttr(test_field="test_value") + result = model.model_dump_wrapped() + + expected = {"test_field": "test_value"} + assert result == expected + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_save_with_directory(self, mock_data_loader, mock_data_writer): + """Test save with directory parameter.""" + mock_data_loader.load_json.return_value = {"existing": "data"} + model = MockJSONFileModel(test_field="test_value") + directory = UPath("/test/directory") + + model.save(directory) + + expected_path = directory / "test.json" + assert model._filepath == expected_path + mock_data_loader.load_json.assert_called_once() + mock_data_writer.write_json.assert_called_once() + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_save_without_directory(self, mock_data_loader, mock_data_writer): + """Test save without directory parameter using existing filepath.""" + mock_data_loader.load_json.return_value = {"existing": "data"} + model = 
MockJSONFileModel(test_field="test_value") + model._filepath = UPath("/existing/path/test.json") + + model.save() + + mock_data_loader.load_json.assert_called_once_with(UPath("/existing/path/test.json")) + mock_data_writer.write_json.assert_called_once() + + def test_save_no_filename_attribute(self): + """Test save with class that has no _filename attribute.""" + model = MockJSONFileModelNoFilename(test_field="test_value") + directory = UPath("/test/directory") + + with pytest.raises(TypeError): + model.save(directory) + + def test_save_no_filepath(self): + """Test save without filepath and without directory parameter.""" + model = MockJSONFileModel(test_field="test_value") + + with pytest.raises(ValueError): + model.save() + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_save_data_merge(self, mock_data_loader, mock_data_writer): + """Test that save merges new data with existing data.""" + existing_data = {"existing_key": "existing_value", "test_key": {"old_field": "old_value"}} + mock_data_loader.load_json.return_value = existing_data + + model = MockJSONFileModel(test_field="new_value") + model._filepath = UPath("/test/path/test.json") + + model.save() + + # Check that the data was merged correctly + call_args = mock_data_writer.write_json.call_args[0] + written_data = call_args[1] + + assert "existing_key" in written_data + assert written_data["existing_key"] == "existing_value" + assert written_data["test_key"]["test_field"] == "new_value" + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_edit_valid_attributes(self, mock_data_loader, mock_data_writer): + """Test edit with valid attributes.""" + mock_data_loader.load_json.return_value = {"test_key": {"test_field": "original_value"}} + model = MockJSONFileModel(test_field="original_value") + model._filepath = 
UPath("/test/path/test.json") + + model.edit(test_field="new_value") + + assert model.test_field == "new_value" + mock_data_writer.write_json.assert_called_once() + + def test_edit_invalid_attribute(self): + """Test edit with invalid attribute.""" + model = MockJSONFileModel(test_field="original_value") + model._filepath = UPath("/test/path/test.json") + + with pytest.raises(AttributeError): + model.edit(nonexistent_field="value") + + def test_integration_file_operations(self): + """Integration test for file operations.""" + with tempfile.TemporaryDirectory() as temp_dir: + directory = UPath(temp_dir) + test_file = directory / "test.json" + + # Create initial file content (only fields the model expects) + initial_data = {"test_field": "initial_value"} + with open(test_file, "w") as f: + json.dump(initial_data, f) + + # Load from file + model = MockJSONFileModel.from_file(directory) + assert model.test_field == "initial_value" + assert model.filepath == test_file + + # Test that methods exist + assert hasattr(model, "edit") + assert hasattr(model, "save") diff --git a/tests/unit/core/types/file_models/test_parameters_model.py b/tests/unit/core/types/file_models/test_parameters_model.py new file mode 100644 index 0000000..1228c2e --- /dev/null +++ b/tests/unit/core/types/file_models/test_parameters_model.py @@ -0,0 +1,268 @@ +"""Unit tests for ParametersModel class.""" + +from unittest.mock import patch + +import pytest +from pydantic import ValidationError +from upath import UPath + +from fluidize.core.types.file_models.parameters_model import ParametersModel +from fluidize.core.types.parameters import Parameter + + +class TestParametersModel: + """Test suite for ParametersModel class.""" + + def test_parameters_model_initialization_empty(self): + """Test ParametersModel initialization with empty parameters.""" + model = ParametersModel() + + assert model.parameters == [] + assert model._filename == "parameters.json" + + def 
test_parameters_model_initialization_with_params(self): + """Test ParametersModel initialization with parameters.""" + param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1") + param2 = Parameter(value="val2", description="desc2", type="number", label="label2", name="param2") + + model = ParametersModel(parameters=[param1, param2]) + + assert len(model.parameters) == 2 + assert model.parameters[0] == param1 + assert model.parameters[1] == param2 + + def test_unpack_and_validate_non_dict_data(self): + """Test _unpack_and_validate with non-dictionary data.""" + result = ParametersModel._unpack_and_validate("not_a_dict") + + assert result == "not_a_dict" + + def test_unpack_and_validate_dict_without_key(self): + """Test _unpack_and_validate with dict that doesn't contain the key.""" + data = {"other_key": "value"} + + result = ParametersModel._unpack_and_validate(data) + + assert result == data + + def test_unpack_and_validate_dict_with_key_list(self): + """Test _unpack_and_validate with dict containing parameters key with list.""" + param_data = [ + {"value": "val1", "description": "desc1", "type": "text", "label": "label1", "name": "param1"}, + {"value": "val2", "description": "desc2", "type": "number", "label": "label2", "name": "param2"}, + ] + data = {"parameters": param_data} + + result = ParametersModel._unpack_and_validate(data) + + assert result == {"parameters": param_data} + + def test_unpack_and_validate_dict_with_key_non_list(self): + """Test _unpack_and_validate with dict containing parameters key with non-list value.""" + data = {"parameters": "not_a_list"} + + result = ParametersModel._unpack_and_validate(data) + + assert result == {"parameters": []} + + def test_unpack_and_validate_dict_with_key_none(self): + """Test _unpack_and_validate with dict containing parameters key with None.""" + data = {"parameters": None} + + result = ParametersModel._unpack_and_validate(data) + + assert result == {"parameters": []} 
+ + def test_unpack_and_validate_no_key_config(self): + """Test _unpack_and_validate when class has no Key config.""" + # Test with a different class - manually test the logic + data = {"test_field": "value", "other_field": "data"} + + # This would be the behavior if no Key class exists + # The method would just return the data as-is since there's no key to unpack + result = data # This simulates what would happen without Key config + + assert result == data + + def test_unpack_and_validate_key_config_no_key_attr(self): + """Test _unpack_and_validate when Key config has no key attribute.""" + + # Create a test class with Key config but no key attribute + class TestParametersModelNoKeyAttr(ParametersModel): + class Key: + pass + + data = { + "parameters": [{"value": "val", "description": "desc", "type": "text", "label": "label", "name": "param"}] + } + + result = TestParametersModelNoKeyAttr._unpack_and_validate(data) + + assert result == data + + def test_model_dump_wrapped(self): + """Test model_dump_wrapped returns correctly formatted data.""" + param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1") + param2 = Parameter(value="val2", description="desc2", type="number", label="label2", name="param2") + + model = ParametersModel(parameters=[param1, param2]) + + result = model.model_dump_wrapped() + + assert "parameters" in result + assert len(result["parameters"]) == 2 + assert result["parameters"][0]["name"] == "param1" + assert result["parameters"][1]["name"] == "param2" + + def test_model_dump_wrapped_empty_parameters(self): + """Test model_dump_wrapped with empty parameters.""" + model = ParametersModel() + + result = model.model_dump_wrapped() + + assert result == {"parameters": []} + + def test_key_class_configuration(self): + """Test that Key class is properly configured.""" + assert hasattr(ParametersModel, "Key") + assert hasattr(ParametersModel.Key, "key") + assert ParametersModel.Key.key == "parameters" + + 
@patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_with_wrapped_data(self, mock_data_loader): + """Test from_file with wrapped data structure.""" + param_data = [ + {"value": "val1", "description": "desc1", "type": "text", "label": "label1", "name": "param1"}, + {"value": "val2", "description": "desc2", "type": "number", "label": "label2", "name": "param2"}, + ] + wrapped_data = {"parameters": param_data} + mock_data_loader.load_json.return_value = wrapped_data + + directory = UPath("/test/directory") + result = ParametersModel.from_file(directory) + + assert len(result.parameters) == 2 + assert result.parameters[0].name == "param1" + assert result.parameters[1].name == "param2" + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_with_unwrapped_data(self, mock_data_loader): + """Test from_file with unwrapped data structure.""" + param_data = [ + {"value": "val1", "description": "desc1", "type": "text", "label": "label1", "name": "param1"}, + ] + unwrapped_data = {"parameters": param_data} + mock_data_loader.load_json.return_value = unwrapped_data + + directory = UPath("/test/directory") + result = ParametersModel.from_file(directory) + + assert len(result.parameters) == 1 + assert result.parameters[0].name == "param1" + + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_from_file_invalid_parameter_data(self, mock_data_loader): + """Test from_file with invalid parameter data.""" + invalid_data = {"parameters": [{"invalid": "data"}]} + mock_data_loader.load_json.return_value = invalid_data + + directory = UPath("/test/directory") + + with pytest.raises(ValidationError): + ParametersModel.from_file(directory) + + def test_from_dict_and_path_with_valid_data(self): + """Test from_dict_and_path with valid data.""" + param_data = [ + {"value": "val1", "description": "desc1", "type": "text", "label": "label1", "name": "param1"}, + ] + data = {"parameters": param_data} + path = 
UPath("/test/path/parameters.json") + + result = ParametersModel.from_dict_and_path(data, path) + + assert len(result.parameters) == 1 + assert result.parameters[0].name == "param1" + assert result._filepath == path + + def test_model_validation_integration(self): + """Integration test for model validation with various data formats.""" + # Test with complete parameter data + complete_param = { + "value": "test_value", + "description": "Test description", + "type": "text", + "label": "Test Label", + "name": "test_param", + "latex": "\\alpha", + "location": ["section1", "subsection2"], + "options": [{"value": "opt1", "label": "Option 1"}], + "scope": "global", + } + + data = {"parameters": [complete_param]} + model = ParametersModel.model_validate(data) + + assert len(model.parameters) == 1 + param = model.parameters[0] + assert param.name == "test_param" + assert param.latex == "\\alpha" + assert param.location == ["section1", "subsection2"] + assert len(param.options) == 1 + assert param.scope == "global" + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + @patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_save_integration(self, mock_data_loader, mock_data_writer): + """Integration test for save functionality.""" + existing_data = {"other_key": "other_value"} + mock_data_loader.load_json.return_value = existing_data + + param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1") + model = ParametersModel(parameters=[param1]) + model._filepath = UPath("/test/path/parameters.json") + + model.save() + + # Verify that data was merged correctly + call_args = mock_data_writer.write_json.call_args[0] + written_data = call_args[1] + + assert "other_key" in written_data + assert "parameters" in written_data + assert len(written_data["parameters"]) == 1 + assert written_data["parameters"][0]["name"] == "param1" + + @patch("fluidize.core.utils.dataloader.data_writer.DataWriter") + 
@patch("fluidize.core.utils.dataloader.data_loader.DataLoader") + def test_edit_functionality(self, mock_data_loader, mock_data_writer): + """Test edit functionality inherited from base class.""" + mock_data_loader.load_json.return_value = {"parameters": []} + param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1") + model = ParametersModel(parameters=[param1]) + model._filepath = UPath("/test/path/parameters.json") + + # Edit the parameters list + new_param = Parameter(value="val2", description="desc2", type="text", label="label2", name="param2") + model.edit(parameters=[new_param]) + + assert len(model.parameters) == 1 + assert model.parameters[0].name == "param2" + mock_data_writer.write_json.assert_called_once() + + def test_parameters_field_default_factory(self): + """Test that parameters field uses default_factory correctly.""" + model1 = ParametersModel() + model2 = ParametersModel() + + # Ensure each instance gets its own list + assert model1.parameters is not model2.parameters + assert model1.parameters == [] + assert model2.parameters == [] + + # Modify one and ensure the other is unaffected + param = Parameter(value="val", description="desc", type="text", label="label", name="param") + model1.parameters.append(param) + + assert len(model1.parameters) == 1 + assert len(model2.parameters) == 0 diff --git a/tests/unit/core/utils/__init__.py b/tests/unit/core/utils/__init__.py new file mode 100644 index 0000000..dbb50c3 --- /dev/null +++ b/tests/unit/core/utils/__init__.py @@ -0,0 +1 @@ +# Unit tests for utils diff --git a/tests/unit/core/utils/logger/__init__.py b/tests/unit/core/utils/logger/__init__.py new file mode 100644 index 0000000..00e0fac --- /dev/null +++ b/tests/unit/core/utils/logger/__init__.py @@ -0,0 +1 @@ +# Unit tests for logger utilities diff --git a/tests/unit/core/utils/logger/test_execution_logger.py b/tests/unit/core/utils/logger/test_execution_logger.py new file mode 100644 index 
0000000..c04cc0c --- /dev/null +++ b/tests/unit/core/utils/logger/test_execution_logger.py @@ -0,0 +1,189 @@ +"""Unit tests for ExecutionLogger class.""" + +from types import SimpleNamespace +from unittest.mock import Mock, patch + +import pytest + +from fluidize.core.types.project import ProjectSummary +from fluidize.core.utils.logger.execution_logger import ExecutionLogger + + +class TestExecutionLogger: + """Test suite for ExecutionLogger class.""" + + @pytest.fixture + def mock_project(self): + """Create a mock ProjectSummary for testing.""" + return Mock(spec=ProjectSummary) + + @pytest.fixture + def mock_run_metadata(self): + """Create mock run metadata with run_number.""" + return SimpleNamespace(run_number=42) + + @pytest.fixture + def invalid_run_metadata(self): + """Create invalid run metadata without run_number.""" + return SimpleNamespace(other_field="value") + + def test_validate_run_metadata_valid(self, mock_run_metadata): + """Test _validate_run_metadata with valid metadata.""" + result = ExecutionLogger._validate_run_metadata(mock_run_metadata) + assert result is True + + def test_validate_run_metadata_none(self): + """Test _validate_run_metadata with None.""" + result = ExecutionLogger._validate_run_metadata(None) + assert result is False + + def test_validate_run_metadata_missing_run_number(self, invalid_run_metadata): + """Test _validate_run_metadata with missing run_number.""" + result = ExecutionLogger._validate_run_metadata(invalid_run_metadata) + assert result is False + + def test_validate_run_metadata_none_run_number(self): + """Test _validate_run_metadata with None run_number.""" + metadata = SimpleNamespace(run_number=None) + result = ExecutionLogger._validate_run_metadata(metadata) + assert result is False + + @patch("fluidize.core.utils.logger.execution_logger.DataWriter") + @patch("fluidize.core.utils.logger.execution_logger.PathFinder") + def test_save_execution_logs_success(self, mock_pathfinder, mock_datawriter, mock_project, 
mock_run_metadata): + """Test successful execution log saving.""" + # Setup mocks + mock_logs_path = Mock() + mock_pathfinder.get_logs_path.return_value = mock_logs_path + mock_logs_path.__truediv__ = Mock(return_value="mock_nodes_dir") + mock_datawriter.create_directory.return_value = True + + mock_stdout_path = "mock_stdout_path" + mock_stderr_path = "mock_stderr_path" + mock_pathfinder.get_log_path.side_effect = [mock_stdout_path, mock_stderr_path] + mock_datawriter.write_text.return_value = True + + # Test execution + result = ExecutionLogger.save_execution_logs( + mock_project, mock_run_metadata, "test_node", "stdout content", "stderr content" + ) + + # Assertions + assert result is True + mock_pathfinder.get_logs_path.assert_called_once_with(mock_project, 42) + mock_datawriter.create_directory.assert_called_once() + assert mock_pathfinder.get_log_path.call_count == 2 + assert mock_datawriter.write_text.call_count == 2 + + def test_save_execution_logs_invalid_metadata(self, mock_project): + """Test save_execution_logs with invalid metadata.""" + result = ExecutionLogger.save_execution_logs( + mock_project, None, "test_node", "stdout content", "stderr content" + ) + assert result is False + + @patch("fluidize.core.utils.logger.execution_logger.DataWriter") + @patch("fluidize.core.utils.logger.execution_logger.PathFinder") + def test_save_execution_logs_exception(self, mock_pathfinder, mock_datawriter, mock_project, mock_run_metadata): + """Test save_execution_logs handling exceptions.""" + # Setup mocks to raise exception + mock_pathfinder.get_logs_path.side_effect = Exception("Test exception") + + # Test execution + result = ExecutionLogger.save_execution_logs( + mock_project, mock_run_metadata, "test_node", "stdout content", "stderr content" + ) + + # Assertions + assert result is False + + @patch("fluidize.core.utils.logger.execution_logger.DataWriter") + @patch("fluidize.core.utils.logger.execution_logger.PathFinder") + def test_save_stdout_success(self, 
mock_pathfinder, mock_datawriter, mock_project, mock_run_metadata):
+        """Test successful stdout log saving."""
+        # Setup mocks
+        mock_stdout_path = "mock_stdout_path"
+        mock_pathfinder.get_log_path.return_value = mock_stdout_path
+        mock_datawriter.write_text.return_value = True
+
+        # Test execution
+        result = ExecutionLogger.save_stdout(mock_project, mock_run_metadata, "test_node", "stdout content")
+
+        # Assertions
+        assert result is True
+        mock_pathfinder.get_log_path.assert_called_once_with(mock_project, 42, "test_node", "stdout")
+        mock_datawriter.write_text.assert_called_once_with(mock_stdout_path, "stdout content")
+
+    def test_save_stdout_no_content(self, mock_project, mock_run_metadata):
+        """Test save_stdout with empty content."""
+        result = ExecutionLogger.save_stdout(mock_project, mock_run_metadata, "test_node", "")
+        assert result is False
+
+    def test_save_stdout_invalid_metadata(self, mock_project):
+        """Test save_stdout with invalid metadata."""
+        result = ExecutionLogger.save_stdout(mock_project, None, "test_node", "stdout content")
+        assert result is False
+
+    @patch("fluidize.core.utils.logger.execution_logger.DataWriter")
+    @patch("fluidize.core.utils.logger.execution_logger.PathFinder")
+    def test_save_stderr_success(self, mock_pathfinder, mock_datawriter, mock_project, mock_run_metadata):
+        """Test successful stderr log saving."""
+        # Setup mocks
+        mock_stderr_path = "mock_stderr_path"
+        mock_pathfinder.get_log_path.return_value = mock_stderr_path
+        mock_datawriter.write_text.return_value = True
+
+        # Test execution
+        result = ExecutionLogger.save_stderr(mock_project, mock_run_metadata, "test_node", "stderr content")
+
+        # Assertions
+        assert result is True
+        mock_pathfinder.get_log_path.assert_called_once_with(mock_project, 42, "test_node", "stderr")
+        mock_datawriter.write_text.assert_called_once_with(mock_stderr_path, "stderr content")
+
+    def test_save_stderr_no_content(self, mock_project, mock_run_metadata):
+        """Test save_stderr with empty content."""
+        result = ExecutionLogger.save_stderr(mock_project, mock_run_metadata, "test_node", "")
+        assert result is False
+
+    def test_save_stderr_invalid_metadata(self, mock_project):
+        """Test save_stderr with invalid metadata."""
+        result = ExecutionLogger.save_stderr(mock_project, None, "test_node", "stderr content")
+        assert result is False
+
+    @patch("fluidize.core.utils.logger.execution_logger.DataWriter")
+    @patch("fluidize.core.utils.logger.execution_logger.PathFinder")
+    def test_save_stderr_exception(self, mock_pathfinder, mock_datawriter, mock_project, mock_run_metadata):
+        """Test save_stderr handling exceptions."""
+        # Setup mocks to raise exception
+        mock_pathfinder.get_log_path.side_effect = Exception("Test exception")
+
+        # Test execution
+        result = ExecutionLogger.save_stderr(mock_project, mock_run_metadata, "test_node", "stderr content")
+
+        # Assertions
+        assert result is False
+
+    @patch("fluidize.core.utils.logger.execution_logger.DataWriter")
+    @patch("fluidize.core.utils.logger.execution_logger.PathFinder")
+    def test_save_execution_logs_partial_success(self, mock_pathfinder, mock_datawriter, mock_project, mock_run_metadata):
+        """Test save_execution_logs with partial success (only stderr has content)."""
+        # Setup mocks
+        mock_logs_path = Mock()
+        mock_pathfinder.get_logs_path.return_value = mock_logs_path
+        mock_logs_path.__truediv__ = Mock(return_value="mock_nodes_dir")
+        mock_datawriter.create_directory.return_value = True
+
+        mock_stderr_path = "mock_stderr_path"
+        mock_pathfinder.get_log_path.return_value = mock_stderr_path
+        mock_datawriter.write_text.return_value = True
+
+        # Test execution with empty stdout
+        result = ExecutionLogger.save_execution_logs(
+            mock_project, mock_run_metadata, "test_node", "", "stderr content"
+        )
+
+        # Assertions - should return True since stderr was saved
+        assert result is True
+        mock_pathfinder.get_logs_path.assert_called_once_with(mock_project, 42)
+        mock_datawriter.create_directory.assert_called_once()
diff --git a/tests/unit/managers/test_node.py b/tests/unit/managers/test_node.py
new file mode 100644
index 0000000..48d1240
--- /dev/null
+++ b/tests/unit/managers/test_node.py
@@ -0,0 +1,486 @@
+"""Unit tests for NodeManager - node-scoped operations."""
+
+from unittest.mock import Mock, patch
+
+import pytest
+
+from fluidize.core.types.graph import GraphData, GraphNode, Position, graphNodeData
+from fluidize.core.types.parameters import Parameter
+from fluidize.managers.node import NodeManager
+from tests.fixtures.sample_projects import SampleProjects
+
+
+class TestNodeManager:
+    """Test suite for NodeManager class."""
+
+    @pytest.fixture
+    def mock_adapter(self):
+        """Create a mock adapter with graph handler."""
+        adapter = Mock()
+        adapter.graph = Mock()
+        return adapter
+
+    @pytest.fixture
+    def sample_project(self):
+        """Sample project for testing."""
+        return SampleProjects.standard_project()
+
+    @pytest.fixture
+    def sample_node(self):
+        """Sample graph node for testing."""
+        return GraphNode(
+            id="test-node-001",
+            position=Position(x=100.0, y=200.0),
+            data=graphNodeData(label="Test Node", simulation_id="test-sim-001"),
+            type="simulation",
+        )
+
+    @pytest.fixture
+    def node_manager(self, mock_adapter, sample_project):
+        """Create a NodeManager instance for testing."""
+        return NodeManager(mock_adapter, sample_project, "test-node-001")
+
+    def test_init(self, mock_adapter, sample_project):
+        """Test NodeManager initialization."""
+        node_manager = NodeManager(mock_adapter, sample_project, "test-node-001")
+
+        assert node_manager.adapter is mock_adapter
+        assert node_manager.project is sample_project
+        assert node_manager.node_id == "test-node-001"
+
+    def test_get_node_success(self, node_manager, mock_adapter, sample_node):
+        """Test successful node retrieval."""
+        graph_data = GraphData(nodes=[sample_node], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+
+        result = node_manager.get_node()
+
+        assert result == sample_node
+        mock_adapter.graph.get_graph.assert_called_once_with(node_manager.project)
+
+    def test_get_node_not_found(self, node_manager, mock_adapter):
+        """Test node not found error."""
+        graph_data = GraphData(nodes=[], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+
+        with pytest.raises(ValueError, match="Node with ID 'test-node-001' not found"):
+            node_manager.get_node()
+
+    def test_exists_true(self, node_manager, mock_adapter, sample_node):
+        """Test exists returns True when node exists."""
+        graph_data = GraphData(nodes=[sample_node], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+
+        assert node_manager.exists() is True
+
+    def test_exists_false(self, node_manager, mock_adapter):
+        """Test exists returns False when node doesn't exist."""
+        graph_data = GraphData(nodes=[], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+
+        assert node_manager.exists() is False
+
+    def test_delete(self, node_manager, mock_adapter):
+        """Test node deletion."""
+        node_manager.delete()
+
+        mock_adapter.graph.delete_node.assert_called_once_with(node_manager.project, "test-node-001")
+
+    def test_update_position(self, node_manager, mock_adapter, sample_node):
+        """Test node position update."""
+        graph_data = GraphData(nodes=[sample_node], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+        mock_adapter.graph.update_node_position.return_value = sample_node
+
+        result = node_manager.update_position(300.0, 400.0)
+
+        assert result == sample_node
+        assert sample_node.position.x == 300.0
+        assert sample_node.position.y == 400.0
+        mock_adapter.graph.update_node_position.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeMetadata_simulation")
+    def test_get_metadata(self, mock_metadata_class, node_manager):
+        """Test getting node metadata."""
+        mock_metadata = Mock()
+        mock_metadata_class.from_file.return_value = mock_metadata
+
+        result = node_manager.get_metadata()
+
+        assert result == mock_metadata
+        mock_metadata_class.from_file.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeProperties_simulation")
+    def test_get_properties(self, mock_properties_class, node_manager):
+        """Test getting node properties."""
+        mock_properties = Mock()
+        mock_properties_class.from_file.return_value = mock_properties
+
+        result = node_manager.get_properties()
+
+        assert result == mock_properties
+        mock_properties_class.from_file.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_get_parameters_model(self, mock_parameters_class, node_manager):
+        """Test getting node parameters model."""
+        mock_parameters = Mock()
+        mock_parameters.parameters = []
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        result = node_manager.get_parameters_model()
+
+        assert result == mock_parameters
+        mock_parameters_class.from_file.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_get_parameters(self, mock_parameters_class, node_manager):
+        """Test getting node parameters list."""
+        mock_parameter = Parameter(
+            value="test_value", description="Test parameter", type="text", label="Test", name="test_param"
+        )
+        mock_parameters = Mock()
+        mock_parameters.parameters = [mock_parameter]
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        result = node_manager.get_parameters()
+
+        assert result == [mock_parameter]
+
+    def test_get_parameter_found(self, node_manager):
+        """Test getting a specific parameter by name."""
+        mock_parameter = Parameter(
+            value="test_value", description="Test parameter", type="text", label="Test", name="test_param"
+        )
+
+        with patch.object(node_manager, "get_parameters", return_value=[mock_parameter]):
+            result = node_manager.get_parameter("test_param")
+            assert result == mock_parameter
+
+    def test_get_parameter_not_found(self, node_manager):
+        """Test getting a parameter that doesn't exist."""
+        with patch.object(node_manager, "get_parameters", return_value=[]):
+            result = node_manager.get_parameter("nonexistent")
+            assert result is None
+
+    def test_validate_all_valid(self, node_manager, sample_node):
+        """Test validation when all components are valid."""
+
+        with (
+            patch.object(node_manager, "get_node", return_value=sample_node),
+            patch.object(node_manager, "get_metadata"),
+            patch.object(node_manager, "get_properties"),
+            patch.object(node_manager, "get_parameters", return_value=[]),
+        ):
+            result = node_manager.validate()
+
+        assert result["valid"] is True
+        assert result["graph_node_exists"] is True
+        assert result["metadata_exists"] is True
+        assert result["properties_exists"] is True
+        assert result["parameters_exists"] is True
+        assert len(result["errors"]) == 0
+
+    def test_validate_with_errors(self, node_manager):
+        """Test validation when there are errors."""
+        with (
+            patch.object(node_manager, "get_node", side_effect=ValueError("Node not found")),
+            patch.object(node_manager, "get_metadata", side_effect=FileNotFoundError("Metadata not found")),
+            patch.object(node_manager, "get_properties"),
+            patch.object(node_manager, "get_parameters", return_value=[]),
+        ):
+            result = node_manager.validate()
+
+        assert result["valid"] is False
+        assert result["graph_node_exists"] is False
+        assert result["metadata_exists"] is False
+        assert len(result["errors"]) == 2
+        assert "Node not found" in result["errors"][0]
+        assert "Metadata error: Metadata not found" in result["errors"][1]
+
+    def test_id_property(self, node_manager):
+        """Test id property returns node_id."""
+        assert node_manager.id == "test-node-001"
+
+    def test_data_property(self, node_manager, mock_adapter, sample_node):
+        """Test data property returns node data."""
+        graph_data = GraphData(nodes=[sample_node], edges=[])
+        mock_adapter.graph.get_graph.return_value = graph_data
+
+        result = node_manager.data
+
+        assert result == sample_node.data
+
+    @patch("fluidize.managers.node.nodeMetadata_simulation")
+    def test_update_metadata(self, mock_metadata_class, node_manager):
+        """Test updating node metadata."""
+        mock_metadata = Mock()
+        mock_metadata_class.from_file.return_value = mock_metadata
+
+        result = node_manager.update_metadata(name="New Name", description="New description")
+
+        assert result == mock_metadata
+        mock_metadata.edit.assert_called_once_with(name="New Name", description="New description")
+
+    @patch("fluidize.managers.node.nodeMetadata_simulation")
+    @patch("fluidize.managers.node.PathFinder")
+    def test_save_metadata(self, mock_path_finder, mock_metadata_class, node_manager):
+        """Test saving metadata to file."""
+        mock_node_path = Mock()
+        mock_path_finder.get_node_path.return_value = mock_node_path
+        mock_metadata = Mock()
+
+        node_manager.save_metadata(mock_metadata)
+
+        mock_path_finder.get_node_path.assert_called_once_with(node_manager.project, "test-node-001")
+        mock_metadata.save.assert_called_once_with(mock_node_path)
+
+    @patch("fluidize.managers.node.nodeProperties_simulation")
+    def test_update_properties(self, mock_properties_class, node_manager):
+        """Test updating node properties."""
+        mock_properties = Mock()
+        mock_properties_class.from_file.return_value = mock_properties
+
+        result = node_manager.update_properties(container_image="new:tag", should_run=False)
+
+        assert result == mock_properties
+        mock_properties.edit.assert_called_once_with(container_image="new:tag", should_run=False)
+
+    @patch("fluidize.managers.node.nodeProperties_simulation")
+    @patch("fluidize.managers.node.PathFinder")
+    def test_save_properties(self, mock_path_finder, mock_properties_class, node_manager):
+        """Test saving properties to file."""
+        mock_node_path = Mock()
+        mock_path_finder.get_node_path.return_value = mock_node_path
+        mock_properties = Mock()
+
+        node_manager.save_properties(mock_properties)
+
+        mock_path_finder.get_node_path.assert_called_once_with(node_manager.project, "test-node-001")
+        mock_properties.save.assert_called_once_with(mock_node_path)
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_update_parameter_existing(self, mock_parameters_class, node_manager):
+        """Test updating an existing parameter."""
+        existing_param = Parameter(
+            value="old_value", description="Old desc", type="text", label="Old", name="test_param"
+        )
+        mock_parameters = Mock()
+        mock_parameters.parameters = [existing_param]
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        new_param = Parameter(value="new_value", description="New desc", type="text", label="New", name="test_param")
+
+        result = node_manager.update_parameter(new_param)
+
+        assert result == new_param
+        assert existing_param.value == "new_value"
+        assert existing_param.description == "New desc"
+        mock_parameters.save.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_update_parameter_new(self, mock_parameters_class, node_manager):
+        """Test adding a new parameter."""
+        mock_parameters = Mock()
+        mock_parameters.parameters = []
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        new_param = Parameter(value="new_value", description="New desc", type="text", label="New", name="new_param")
+
+        result = node_manager.update_parameter(new_param)
+
+        assert result == new_param
+        assert new_param in mock_parameters.parameters
+        mock_parameters.save.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_update_parameter_with_location(self, mock_parameters_class, node_manager):
+        """Test updating parameter with location extension."""
+        existing_param = Parameter(
+            value="old_value", description="Old desc", type="text", label="Old", name="test_param", location=["old"]
+        )
+        mock_parameters = Mock()
+        mock_parameters.parameters = [existing_param]
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        new_param = Parameter(
+            value="new_value",
+            description="New desc",
+            type="text",
+            label="New",
+            name="test_param",
+            location=["new", "location"],
+        )
+
+        node_manager.update_parameter(new_param)
+
+        assert existing_param.location == ["old", "new", "location"]
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_set_parameters(self, mock_parameters_class, node_manager):
+        """Test replacing all parameters."""
+        mock_parameters = Mock()
+        mock_parameters.parameters = []
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        new_params = [
+            Parameter(value="val1", description="desc1", type="text", label="label1", name="param1"),
+            Parameter(value="val2", description="desc2", type="text", label="label2", name="param2"),
+        ]
+
+        result = node_manager.set_parameters(new_params)
+
+        assert result == new_params
+        assert mock_parameters.parameters == new_params
+        mock_parameters.save.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_remove_parameter_success(self, mock_parameters_class, node_manager):
+        """Test successfully removing a parameter."""
+        param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1")
+        param2 = Parameter(value="val2", description="desc2", type="text", label="label2", name="param2")
+
+        mock_parameters = Mock()
+        mock_parameters.parameters = [param1, param2]
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        result = node_manager.remove_parameter("param1")
+
+        assert result is True
+        assert mock_parameters.parameters == [param2]
+        mock_parameters.save.assert_called_once()
+
+    @patch("fluidize.managers.node.nodeParameters_simulation")
+    def test_remove_parameter_not_found(self, mock_parameters_class, node_manager):
+        """Test removing a parameter that doesn't exist."""
+        param1 = Parameter(value="val1", description="desc1", type="text", label="label1", name="param1")
+
+        mock_parameters = Mock()
+        mock_parameters.parameters = [param1]
+        mock_parameters_class.from_file.return_value = mock_parameters
+
+        result = node_manager.remove_parameter("nonexistent")
+
+        assert result is False
+        assert mock_parameters.parameters == [param1]
+        mock_parameters.save.assert_not_called()
+
+    def test_show_parameters_empty(self, node_manager):
+        """Test showing parameters when no parameters exist."""
+        with patch.object(node_manager, "get_parameters", return_value=[]):
+            result = node_manager.show_parameters()
+            assert result == "No parameters found for node 'test-node-001'"
+
+    def test_show_parameters_with_data(self, node_manager):
+        """Test showing parameters with data."""
+        param1 = Parameter(
+            value="value1",
+            description="Description 1",
+            type="text",
+            label="Label 1",
+            name="param1",
+            latex="\\alpha",
+            location=["section1"],
+            scope="global",
+        )
+        param2 = Parameter(value="value2", description="Description 2", type="number", label="Label 2", name="param2")
+
+        with patch.object(node_manager, "get_parameters", return_value=[param1, param2]):
+            result = node_manager.show_parameters()
+
+        assert "Parameters for node 'test-node-001':" in result
+        assert "Parameter 1:" in result
+        assert "Name: param1" in result
+        assert "Value: value1" in result
+        assert "LaTeX: \\alpha" in result
+        assert "Location: ['section1']" in result
+        assert "Scope: global" in result
+        assert "Parameter 2:" in result
+        assert "Name: param2" in result
+
+    @patch("fluidize.managers.node.PathFinder")
+    def test_get_node_directory(self, mock_path_finder, node_manager):
+        """Test getting node directory path."""
+        mock_path = Mock()
+        mock_path_finder.get_node_path.return_value = mock_path
+
+        result = node_manager.get_node_directory()
+
+        assert result == mock_path
+        mock_path_finder.get_node_path.assert_called_once_with(node_manager.project, "test-node-001")
+
+    @patch("fluidize.managers.node.PathFinder")
+    def test_get_metadata_path(self, mock_path_finder, node_manager):
+        """Test getting metadata file path."""
+        mock_node_path = Mock()
+        mock_path_finder.get_node_path.return_value = mock_node_path
+
+        # Mock the __truediv__ method to handle the / operator
+        mock_node_path.__truediv__ = Mock(return_value="mocked_metadata_path")
+
+        node_manager.get_metadata_path()
+
+        mock_path_finder.get_node_path.assert_called_once_with(node_manager.project, "test-node-001")
+        mock_node_path.__truediv__.assert_called_once()
+
+    @patch("fluidize.managers.node.PathFinder")
+    def test_get_properties_path(self, mock_path_finder, node_manager):
+        """Test getting properties file path."""
+        mock_path = Mock()
+        mock_path_finder.get_properties_path.return_value = mock_path
+
+        result = node_manager.get_properties_path()
+
+        assert result == mock_path
+        mock_path_finder.get_properties_path.assert_called_once_with(node_manager.project, "test-node-001")
+
+    @patch("fluidize.managers.node.PathFinder")
+    def test_get_parameters_path(self, mock_path_finder, node_manager):
+        """Test getting parameters file path."""
+        mock_path = Mock()
+        mock_path_finder.get_node_parameters_path.return_value = mock_path
+
+        result = node_manager.get_parameters_path()
+
+        assert result == mock_path
+        mock_path_finder.get_node_parameters_path.assert_called_once_with(node_manager.project, "test-node-001")
+
+    def test_to_dict_success(self, node_manager, sample_node):
+        """Test converting node to dictionary successfully."""
+        mock_metadata = Mock()
+        mock_metadata.model_dump.return_value = {"name": "Test Node"}
+        mock_properties = Mock()
+        mock_properties.model_dump.return_value = {"container_image": "test:latest"}
+        mock_parameter = Mock()
+        mock_parameter.model_dump.return_value = {"name": "param1", "value": "value1"}
+
+        with (
+            patch.object(node_manager, "get_node", return_value=sample_node),
+            patch.object(node_manager, "get_metadata", return_value=mock_metadata),
+            patch.object(node_manager, "get_properties", return_value=mock_properties),
+            patch.object(node_manager, "get_parameters", return_value=[mock_parameter]),
+            patch.object(node_manager, "get_node_directory", return_value="/path/to/node"),
+            patch.object(node_manager, "get_metadata_path", return_value="/path/to/metadata.yaml"),
+            patch.object(node_manager, "get_properties_path", return_value="/path/to/properties.yaml"),
+            patch.object(node_manager, "get_parameters_path", return_value="/path/to/parameters.json"),
+        ):
+            result = node_manager.to_dict()
+
+        assert "graph_node" in result
+        assert "metadata" in result
+        assert "properties" in result
+        assert "parameters" in result
+        assert "paths" in result
+        assert result["metadata"] == {"name": "Test Node"}
+        assert result["properties"] == {"container_image": "test:latest"}
+        assert len(result["parameters"]) == 1
+
+    def test_to_dict_error(self, node_manager):
+        """Test to_dict when an error occurs."""
+        with patch.object(node_manager, "get_node", side_effect=Exception("Test error")):
+            result = node_manager.to_dict()
+
+        assert "error" in result
+        assert result["error"] == "Test error"
+        assert result["node_id"] == "test-node-001"
+        assert result["project"] == node_manager.project.id
diff --git a/tests/unit/managers/test_project.py b/tests/unit/managers/test_project.py
index 5d9de1a..aa79276 100644
--- a/tests/unit/managers/test_project.py
+++ b/tests/unit/managers/test_project.py
@@ -4,8 +4,8 @@
 
 import pytest
 
-from fluidize.managers.project_graph import ProjectGraph
-from fluidize.managers.project_manager import Project
+from fluidize.managers.graph import GraphManager
+from fluidize.managers.project import ProjectManager
 from tests.fixtures.sample_projects import SampleProjects
 
@@ -27,11 +27,11 @@ def sample_project_summary(self):
     @pytest.fixture
     def project_wrapper(self, mock_adapter, sample_project_summary):
         """Create a Project wrapper instance for testing."""
-        return Project(mock_adapter, sample_project_summary)
+        return ProjectManager(mock_adapter, sample_project_summary)
 
     def test_init(self, mock_adapter, sample_project_summary):
         """Test Project wrapper initialization."""
-        project = Project(mock_adapter, sample_project_summary)
+        project = ProjectManager(mock_adapter, sample_project_summary)
 
         assert project._adapter is mock_adapter
         assert project._project_summary is sample_project_summary
@@ -44,7 +44,7 @@ def test_graph_property_lazy_initialization(self, project_wrapper, mock_adapter)
         # Access graph property
         graph = project_wrapper.graph
 
-        assert isinstance(graph, ProjectGraph)
+        assert isinstance(graph, GraphManager)
         assert project_wrapper._graph is graph  # Cached
         assert graph.adapter is mock_adapter
         assert graph.project is project_wrapper._project_summary
@@ -97,7 +97,7 @@ def test_created_at_property_with_attribute(self, mock_adapter):
         project_summary.status = "active"
         project_summary.created_at = "2024-01-01T00:00:00Z"
 
-        project = Project(mock_adapter, project_summary)
+        project = ProjectManager(mock_adapter, project_summary)
 
         assert project.created_at == "2024-01-01T00:00:00Z"
 
@@ -120,7 +120,7 @@ def test_updated_at_property_with_attribute(self, mock_adapter):
         project_summary.status = "active"
         project_summary.updated_at = "2024-01-01T12:00:00Z"
 
-        project = Project(mock_adapter, project_summary)
+        project = ProjectManager(mock_adapter, project_summary)
 
         assert project.updated_at == "2024-01-01T12:00:00Z"
 
@@ -149,7 +149,7 @@ def test_to_dict_complete_project(self, project_wrapper, sample_project_summary)
     def test_to_dict_minimal_project(self, mock_adapter):
         """Test to_dict with minimal project data."""
         minimal_summary = SampleProjects.minimal_project()
-        project = Project(mock_adapter, minimal_summary)
+        project = ProjectManager(mock_adapter, minimal_summary)
 
         result = project.to_dict()
 
@@ -179,30 +179,13 @@ def test_to_dict_with_timestamps(self, mock_adapter):
         project_summary.created_at = "2024-01-01T00:00:00Z"
         project_summary.updated_at = "2024-01-01T12:00:00Z"
 
-        project = Project(mock_adapter, project_summary)
+        project = ProjectManager(mock_adapter, project_summary)
 
         result = project.to_dict()
 
         assert result["created_at"] == "2024-01-01T00:00:00Z"
         assert result["updated_at"] == "2024-01-01T12:00:00Z"
 
-    def test_repr(self, project_wrapper, sample_project_summary):
-        """Test __repr__ method."""
-        result = repr(project_wrapper)
-        expected = f"Project(id='{sample_project_summary.id}', label='{sample_project_summary.label}')"
-        assert result == expected
-
-    def test_repr_with_none_label(self, mock_adapter):
-        """Test __repr__ method when label is None."""
-        minimal_summary = SampleProjects.minimal_project()
-        project = Project(mock_adapter, minimal_summary)
-
-        result = repr(project)
-        # Handle case where minimal project might have label=None or no label attribute
-        label_value = getattr(minimal_summary, "label", None)
-        expected = f"Project(id='{minimal_summary.id}', label='{label_value}')"
-        assert result == expected
-
     def test_str_with_label(self, project_wrapper, sample_project_summary):
         """Test __str__ method with label."""
         result = str(project_wrapper)
@@ -212,7 +195,7 @@ def test_str_with_label(self, project_wrapper, sample_project_summary):
     def test_str_without_label(self, mock_adapter):
         """Test __str__ method without label."""
         minimal_summary = SampleProjects.minimal_project()
-        project = Project(mock_adapter, minimal_summary)
+        project = ProjectManager(mock_adapter, minimal_summary)
 
         result = str(project)
         expected = f"Project {minimal_summary.id}: No label"
@@ -260,12 +243,12 @@ def test_adapter_preservation(self, project_wrapper, mock_adapter):
         assert graph.adapter is mock_adapter
 
     def test_graph_integration(self, project_wrapper, mock_adapter, sample_project_summary):
-        """Test integration between Project wrapper and ProjectGraph."""
+        """Test integration between Project wrapper and GraphManager."""
        # Access graph
         graph = project_wrapper.graph
 
         # Verify proper integration
-        assert isinstance(graph, ProjectGraph)
+        assert isinstance(graph, GraphManager)
         assert graph.adapter is mock_adapter
         assert graph.project is sample_project_summary
 
@@ -277,7 +260,7 @@ def test_graph_integration(self, project_wrapper, mock_adapter, sample_project_s
     def test_wrapper_with_different_project_types(self, mock_adapter, project_fixture):
         """Test Project wrapper with different types of ProjectSummary objects."""
         project_summary = getattr(SampleProjects, project_fixture)()
-        project = Project(mock_adapter, project_summary)
+        project = ProjectManager(mock_adapter, project_summary)
 
         # Basic functionality should work for all project types
         assert project.id == project_summary.id
@@ -285,12 +268,9 @@ def test_wrapper_with_different_project_types(self, mock_adapter, project_fixtur
 
         # Graph property should be accessible
         graph = project.graph
-        assert isinstance(graph, ProjectGraph)
+        assert isinstance(graph, GraphManager)
 
-        # String representations should work
-        repr_result = repr(project)
+        # String representation should work
         str_result = str(project)
-        assert isinstance(repr_result, str)
         assert isinstance(str_result, str)
-        assert project_summary.id in repr_result
         assert project_summary.id in str_result
diff --git a/tests/unit/managers/test_project_graph.py b/tests/unit/managers/test_project_graph.py
index 5f3f864..f411022 100644
--- a/tests/unit/managers/test_project_graph.py
+++ b/tests/unit/managers/test_project_graph.py
@@ -1,4 +1,4 @@
-"""Unit tests for ProjectGraph manager - project-scoped graph operations."""
+"""Unit tests for GraphManager manager - project-scoped graph operations."""
 
 import datetime
 from unittest.mock import Mock
@@ -8,13 +8,14 @@
 from fluidize.core.types.node import author, nodeMetadata_simulation, nodeProperties_simulation, tag
 from fluidize.core.types.parameters import Parameter
 from fluidize.core.types.runs import RunStatus
-from fluidize.managers.project_graph import ProjectGraph
+from fluidize.managers.graph import GraphManager
+from fluidize.managers.node import NodeManager
 from tests.fixtures.sample_graphs import SampleGraphs
 from tests.fixtures.sample_projects import SampleProjects
 
 
-class TestProjectGraph:
-    """Test suite for ProjectGraph manager class."""
+class TestGraphManager:
+    """Test suite for GraphManager manager class."""
 
     @pytest.fixture
     def mock_adapter(self):
@@ -30,14 +31,14 @@ def sample_project(self):
 
     @pytest.fixture
     def project_graph(self, mock_adapter, sample_project):
-        """Create a ProjectGraph instance for testing."""
-        return ProjectGraph(mock_adapter, sample_project)
+        """Create a GraphManager instance for testing."""
+        return GraphManager(mock_adapter, sample_project)
 
     def test_init_with_graph_initialization(self, mock_adapter, sample_project):
-        """Test ProjectGraph initialization triggers graph initialization."""
+        """Test GraphManager initialization triggers graph initialization."""
         mock_adapter.graph.ensure_graph_initialized = Mock()
 
-        project_graph = ProjectGraph(mock_adapter, sample_project)
+        project_graph = GraphManager(mock_adapter, sample_project)
 
         assert project_graph.adapter is mock_adapter
         assert project_graph.project is sample_project
@@ -49,7 +50,7 @@ def test_init_without_graph_handler(self, sample_project):
         del adapter_without_graph.graph  # Remove graph attribute
 
         # Should not raise error
-        project_graph = ProjectGraph(adapter_without_graph, sample_project)
+        project_graph = GraphManager(adapter_without_graph, sample_project)
 
         assert project_graph.adapter is adapter_without_graph
         assert project_graph.project is sample_project
@@ -61,7 +62,7 @@ def test_init_without_ensure_method(self, sample_project):
         del adapter.graph.ensure_graph_initialized  # Remove ensure method
 
         # Should not raise error
-        project_graph = ProjectGraph(adapter, sample_project)
+        project_graph = GraphManager(adapter, sample_project)
 
         assert project_graph.adapter is adapter
         assert project_graph.project is sample_project
@@ -94,7 +95,8 @@ def test_add_node_success(self, project_graph, mock_adapter):
 
         result = project_graph.add_node(node)
 
-        assert result == node
+        assert isinstance(result, NodeManager)
+        assert result.node_id == node.id
         mock_adapter.graph.insert_node.assert_called_once_with(
             project_graph.project,
             node,
@@ -108,7 +110,8 @@ def test_add_node_with_sim_global_false(self, project_graph, mock_adapter):
 
         result = project_graph.add_node(node, sim_global=False)
 
-        assert result == node
+        assert isinstance(result, NodeManager)
+        assert result.node_id == node.id
         mock_adapter.graph.insert_node.assert_called_once_with(project_graph.project, node, False)
 
     def test_add_node_from_scratch_success(self, project_graph, mock_adapter):
@@ -142,7 +145,8 @@ def test_add_node_from_scratch_success(self, project_graph, mock_adapter):
 
         result = project_graph.add_node_from_scratch(node, node_properties, node_metadata)
 
-        assert result == node
+        assert isinstance(result, NodeManager)
+        assert result.node_id == node.id
         mock_adapter.graph.insert_node_from_scratch.assert_called_once_with(
             project_graph.project,
             node,
@@ -175,7 +179,8 @@ def test_add_node_from_scratch_with_repo_link(self, project_graph, mock_adapter)
 
         result = project_graph.add_node_from_scratch(node, node_properties, node_metadata, repo_link)
 
-        assert result == node
+        assert isinstance(result, NodeManager)
+        assert result.node_id == node.id
         mock_adapter.graph.insert_node_from_scratch.assert_called_once_with(
             project_graph.project, node, node_properties, node_metadata, repo_link
         )
@@ -273,12 +278,12 @@ def test_adapter_error_propagation_add_edge(self, project_graph, mock_adapter):
             project_graph.add_edge(edge)
 
     def test_project_scoping(self, mock_adapter):
-        """Test that different ProjectGraph instances are properly scoped to their projects."""
+        """Test that different GraphManager instances are properly scoped to their projects."""
         project1 = SampleProjects.standard_project()
         project2 = SampleProjects.minimal_project()
 
-        graph1 = ProjectGraph(mock_adapter, project1)
-        graph2 = ProjectGraph(mock_adapter, project2)
+        graph1 = GraphManager(mock_adapter, project1)
+        graph2 = GraphManager(mock_adapter, project2)
 
         node = SampleGraphs.sample_nodes()[0]
 
@@ -295,7 +300,7 @@ def test_project_scoping(self, mock_adapter):
         assert calls[1][0][0] == project2  # Second call with project2
 
     def test_all_methods_delegate_to_adapter(self, project_graph, mock_adapter):
-        """Test that all ProjectGraph methods properly delegate to adapter."""
+        """Test that all GraphManager methods properly delegate to adapter."""
         # Setup return values
         mock_graph_data = SampleGraphs.single_node_graph()
         mock_node = SampleGraphs.sample_nodes()[0]
@@ -344,7 +349,8 @@ def test_add_node_from_scratch_delegates_to_adapter(self, project_graph, mock_ad
 
         result = project_graph.add_node_from_scratch(node, node_properties, node_metadata)
 
-        assert result == node
+        assert isinstance(result, NodeManager)
+        assert result.node_id == node.id
         mock_adapter.graph.insert_node_from_scratch.assert_called_once_with(
             project_graph.project, node, node_properties, node_metadata, None
         )
@@ -462,8 +468,8 @@ def test_parameter_methods_use_correct_project_context(self, mock_adapter):
         project1 = SampleProjects.standard_project()
         project2 = SampleProjects.minimal_project()
 
-        graph1 = ProjectGraph(mock_adapter, project1)
-        graph2 = ProjectGraph(mock_adapter, project2)
+        graph1 = GraphManager(mock_adapter, project1)
+        graph2 = GraphManager(mock_adapter, project2)
 
         parameter = Parameter(
             name="test_param", value="test_value", type="text", label="Test Parameter", description="A test parameter"
diff --git a/tests/unit/managers/test_projects.py b/tests/unit/managers/test_projects.py
index 5c48621..a5d3d41 100644
--- a/tests/unit/managers/test_projects.py
+++ b/tests/unit/managers/test_projects.py
@@ -4,7 +4,7 @@
 
 import pytest
 
-from fluidize.managers.projects import Projects
+from fluidize.managers.registry import RegistryManager
 from tests.fixtures.sample_projects import SampleProjects
 
@@ -21,17 +21,17 @@ def mock_adapter(self):
     @pytest.fixture
     def projects_manager(self, mock_adapter):
         """Create a Projects manager instance for testing."""
-        return Projects(mock_adapter)
+        return RegistryManager(mock_adapter)
 
     def test_init(self, mock_adapter):
         """Test Projects manager initialization."""
-        manager = Projects(mock_adapter)
+        manager = RegistryManager(mock_adapter)
 
         assert manager.adapter is mock_adapter
 
     def test_create_project_with_all_fields(self, projects_manager, mock_adapter):
         """Test create method with all optional fields."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         mock_adapter.projects.upsert.return_value = sample_project
@@ -44,7 +44,7 @@ def test_create_project_with_all_fields(self, projects_manager, mock_adapter):
             status=sample_project.status,
         )
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == sample_project.id
         assert result.label == sample_project.label
         assert result.description == sample_project.description
@@ -60,7 +60,7 @@ def test_create_project_with_all_fields(self, projects_manager, mock_adapter):
 
     def test_create_project_minimal(self, projects_manager, mock_adapter):
         """Test create method with minimal required fields."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         project_id = "minimal-create"
         minimal_project = SampleProjects.minimal_project()
@@ -68,7 +68,7 @@ def test_create_project_minimal(self, projects_manager, mock_adapter):
 
         result = projects_manager.create(project_id)
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == minimal_project.id
         assert result.metadata_version == minimal_project.metadata_version
         mock_adapter.projects.upsert.assert_called_once_with(
@@ -77,7 +77,7 @@ def test_create_project_partial_fields(self, projects_manager, mock_adapter):
         """Test create method with some optional fields."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         mock_adapter.projects.upsert.return_value = sample_project
@@ -86,7 +86,7 @@ def test_create_project_partial_fields(self, projects_manager, mock_adapter):
             project_id="partial-create", label="Partial Project", description="Only some fields provided"
         )
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == sample_project.id
         mock_adapter.projects.upsert.assert_called_once_with(
             id="partial-create",
@@ -98,7 +98,7 @@ def test_get_project(self, projects_manager, mock_adapter):
         """Test get method retrieves project by ID."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         project_id = sample_project.id
@@ -106,7 +106,7 @@ def test_get_project(self, projects_manager, mock_adapter):
 
         result = projects_manager.get(project_id)
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == sample_project.id
         mock_adapter.projects.retrieve.assert_called_once_with(project_id)
 
@@ -131,7 +131,7 @@ def test_list_projects_empty(self, projects_manager, mock_adapter):
 
     def test_list_projects_with_data(self, projects_manager, mock_adapter):
         """Test list method with multiple projects."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_projects = SampleProjects.projects_for_listing()
         mock_adapter.projects.list.return_value = sample_projects
@@ -141,12 +141,12 @@ def test_list_projects_with_data(self, projects_manager, mock_adapter):
 
         assert isinstance(result, list)
         assert len(result) == 3
         for project in result:
-            assert isinstance(project, Project)
+            assert isinstance(project, ProjectManager)
 
         mock_adapter.projects.list.assert_called_once()
 
     def test_update_project_with_all_fields(self, projects_manager, mock_adapter):
         """Test update method with all optional fields."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         project_id = sample_project.id
@@ -162,7 +162,7 @@ def test_update_project_with_all_fields(self, projects_manager, mock_adapter):
             status=update_data["status"],
         )
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == sample_project.id
         mock_adapter.projects.upsert.assert_called_once_with(
             id=project_id,
@@ -174,7 +174,7 @@ def test_update_project_with_all_fields(self, projects_manager, mock_adapter):
 
     def test_update_project_partial_fields(self, projects_manager, mock_adapter):
         """Test update method with only some fields."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         project_id = "update-partial"
@@ -184,7 +184,7 @@ def test_update_project_partial_fields(self, projects_manager, mock_adapter):
             project_id=project_id, label="Updated Label", description="Updated Description"
         )
 
-        assert isinstance(result, Project)
+        assert isinstance(result, ProjectManager)
         assert result.id == sample_project.id
         mock_adapter.projects.upsert.assert_called_once_with(
             id=project_id, label="Updated Label", description="Updated Description"
@@ -192,7 +192,7 @@ def test_update_project_no_optional_fields(self, projects_manager, mock_adapter)
 
     def test_update_project_no_optional_fields(self, projects_manager, mock_adapter):
         """Test update method with only project_id."""
-        from fluidize.managers.project_manager import Project
+        from fluidize.managers.project import ProjectManager
 
         sample_project = SampleProjects.standard_project()
         project_id = "update-id-only"
@@ -200,7 +200,7 @@ def test_update_project_no_optional_fields(self, projects_manager, mock_adapter)
result = projects_manager.update(project_id=project_id) - assert isinstance(result, Project) + assert isinstance(result, ProjectManager) assert result.id == sample_project.id mock_adapter.projects.upsert.assert_called_once_with(id=project_id) @@ -215,7 +215,7 @@ def test_update_project_no_optional_fields(self, projects_manager, mock_adapter) ) def test_update_project_single_field(self, projects_manager, mock_adapter, field_name, field_value): """Test update method with individual fields.""" - from fluidize.managers.project_manager import Project + from fluidize.managers.project import ProjectManager sample_project = SampleProjects.standard_project() project_id = "single-field-update" @@ -225,7 +225,7 @@ def test_update_project_single_field(self, projects_manager, mock_adapter, field result = projects_manager.update(**kwargs) - assert isinstance(result, Project) + assert isinstance(result, ProjectManager) assert result.id == sample_project.id expected_call = {"id": project_id, field_name: field_value} @@ -233,7 +233,7 @@ def test_update_project_single_field(self, projects_manager, mock_adapter, field def test_update_filters_none_values(self, projects_manager, mock_adapter): """Test update method only includes non-None values in update data.""" - from fluidize.managers.project_manager import Project + from fluidize.managers.project import ProjectManager sample_project = SampleProjects.standard_project() project_id = "filter-none-test" @@ -247,7 +247,7 @@ def test_update_filters_none_values(self, projects_manager, mock_adapter): status=None, # Should be filtered out ) - assert isinstance(result, Project) + assert isinstance(result, ProjectManager) assert result.id == sample_project.id mock_adapter.projects.upsert.assert_called_once_with( id=project_id, @@ -278,7 +278,7 @@ def test_adapter_error_propagation(self, projects_manager, mock_adapter): def test_manager_adapter_delegation(self, mock_adapter): """Test that manager properly delegates to adapter methods.""" - 
manager = Projects(mock_adapter) + manager = RegistryManager(mock_adapter) # Ensure manager stores adapter correctly assert manager.adapter is mock_adapter @@ -302,7 +302,7 @@ def test_manager_adapter_delegation(self, mock_adapter): def test_manager_interface_compatibility(self, mock_adapter): """Test that manager provides expected interface methods.""" - manager = Projects(mock_adapter) + manager = RegistryManager(mock_adapter) # Verify all expected methods exist assert hasattr(manager, "create") @@ -318,9 +318,9 @@ def test_manager_interface_compatibility(self, mock_adapter): def test_project_wrapper_return_types(self, mock_adapter): """Test that manager methods return Project wrapper instances.""" - from fluidize.managers.project_manager import Project + from fluidize.managers.project import ProjectManager - manager = Projects(mock_adapter) + manager = RegistryManager(mock_adapter) sample_project = SampleProjects.standard_project() mock_adapter.projects.upsert.return_value = sample_project mock_adapter.projects.retrieve.return_value = sample_project @@ -328,30 +328,30 @@ def test_project_wrapper_return_types(self, mock_adapter): # Test create returns Project wrapper created_project = manager.create("test-create") - assert isinstance(created_project, Project) + assert isinstance(created_project, ProjectManager) assert created_project.id == sample_project.id # Test get returns Project wrapper retrieved_project = manager.get("test-get") - assert isinstance(retrieved_project, Project) + assert isinstance(retrieved_project, ProjectManager) assert retrieved_project.id == sample_project.id # Test list returns list of Project wrappers projects_list = manager.list() assert isinstance(projects_list, list) assert len(projects_list) == 1 - assert isinstance(projects_list[0], Project) + assert isinstance(projects_list[0], ProjectManager) assert projects_list[0].id == sample_project.id # Test update returns Project wrapper updated_project = manager.update("test-update", 
label="New Label") - assert isinstance(updated_project, Project) + assert isinstance(updated_project, ProjectManager) assert updated_project.id == sample_project.id def test_project_wrapper_graph_property_access(self, mock_adapter): """Test that Project wrapper provides graph property access.""" - manager = Projects(mock_adapter) + manager = RegistryManager(mock_adapter) sample_project = SampleProjects.standard_project() mock_adapter.projects.upsert.return_value = sample_project mock_adapter.graph = Mock() # Mock graph handler diff --git a/tests/unit/managers/test_simulation.py b/tests/unit/managers/test_simulation.py new file mode 100644 index 0000000..7137e98 --- /dev/null +++ b/tests/unit/managers/test_simulation.py @@ -0,0 +1,87 @@ +"""Unit tests for Simulations Manager - high-level simulation library interface.""" + +from unittest.mock import Mock, patch + +import pytest + +from fluidize.managers.simulations import SimulationsManager + + +class TestSimulationsManager: + """Test suite for SimulationsManager class.""" + + @pytest.fixture + def mock_adapter(self): + """Create a mock adapter for testing.""" + adapter = Mock() + return adapter + + @pytest.fixture + def simulations_manager(self, mock_adapter): + """Create a SimulationsManager instance for testing.""" + with patch("fluidize.managers.simulations.FluidizeSDK"): + return SimulationsManager(mock_adapter) + + @patch("fluidize.managers.simulations.FluidizeSDK") + def test_init(self, mock_sdk_class, mock_adapter): + """Test SimulationsManager initialization.""" + manager = SimulationsManager(mock_adapter) + + assert manager._adapter is mock_adapter + assert manager.fluidize_sdk is not None + mock_sdk_class.assert_called_once() + + @patch("fluidize.managers.simulations.FluidizeSDK") + def test_list_simulations_returns_list(self, mock_sdk_class, simulations_manager): + """Test that list_simulations returns a list.""" + # Arrange + mock_sdk_instance = mock_sdk_class.return_value + 
simulations_manager.fluidize_sdk = mock_sdk_instance + # Create a mock simulation object with model_dump method + mock_simulation = Mock() + mock_simulation.model_dump.return_value = { + "name": "Test Simulation", + "id": "sim_001", + "description": "A test simulation", + "date": "2024-01-01", + "version": "1.0.0", + "authors": [], + "tags": [], + } + mock_sdk_instance.simulation.list_simulations.return_value = [mock_simulation] + + # Act + result = simulations_manager.list_simulations() + + # Assert + assert isinstance(result, list) + mock_sdk_instance.simulation.list_simulations.assert_called_once_with(sim_global=True) + + @patch("fluidize.managers.simulations.FluidizeSDK") + def test_list_simulations_empty_list(self, mock_sdk_class, simulations_manager): + """Test that list_simulations handles empty results.""" + # Arrange + mock_sdk_instance = mock_sdk_class.return_value + simulations_manager.fluidize_sdk = mock_sdk_instance + mock_sdk_instance.simulation.list_simulations.return_value = [] + + # Act + result = simulations_manager.list_simulations() + + # Assert + assert result == [] + mock_sdk_instance.simulation.list_simulations.assert_called_once_with(sim_global=True) + + @patch("fluidize.managers.simulations.FluidizeSDK") + def test_list_simulations_sdk_delegation(self, mock_sdk_class, simulations_manager): + """Test that list_simulations properly delegates to SDK.""" + # Arrange + mock_sdk_instance = mock_sdk_class.return_value + simulations_manager.fluidize_sdk = mock_sdk_instance + mock_sdk_instance.simulation.list_simulations.return_value = [] + + # Act + simulations_manager.list_simulations() + + # Assert + mock_sdk_instance.simulation.list_simulations.assert_called_once_with(sim_global=True) diff --git a/tests/unit/test_run_flow_direct.py b/tests/unit/test_run_flow_direct.py index a4d65ca..9650685 100644 --- a/tests/unit/test_run_flow_direct.py +++ b/tests/unit/test_run_flow_direct.py @@ -13,7 +13,7 @@ from fluidize.config import FluidizeConfig from 
fluidize.core.types.project import ProjectSummary from fluidize.core.types.runs import RunFlowPayload -from fluidize.managers.project_manager import Project +from fluidize.managers.project import ProjectManager @pytest.fixture @@ -79,7 +79,7 @@ def local_adapter(test_config): @pytest.fixture def project_manager(local_adapter, project_from_file): """Create a Project manager instance for testing.""" - return Project(local_adapter, project_from_file) + return ProjectManager(local_adapter, project_from_file) class TestRunFlowDirect: @@ -300,7 +300,7 @@ def test_run_flow_empty_graph(self, local_adapter, test_config): # Load project using from_file like the other tests project_summary = ProjectSummary.from_file(empty_project_dir) - project_manager = Project(local_adapter, project_summary) + project_manager = ProjectManager(local_adapter, project_summary) payload = RunFlowPayload(name="empty_graph_test") diff --git a/utils/interactive-testing.ipynb b/utils/interactive-testing.ipynb deleted file mode 100644 index 51c23f1..0000000 --- a/utils/interactive-testing.ipynb +++ /dev/null @@ -1,464 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Fluidize-Python Interactive Demo\n", - "\n", - "This notebook demonstrates the fluidize-python library for managing scientific computing projects.\n", - "\n", - "## Setup\n", - "\n", - "First, let's import the client and see where our projects will be stored:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83d\udcc1 Projects will be stored in: /Users/henrybae/.fluidize/projects\n", - "\ud83d\udcc1 Base directory: /Users/henrybae/.fluidize\n", - "\ud83d\ude80 Client ready in 'local' mode!\n" - ] - } - ], - "source": [ - "# Import the fluidize client - handlers auto-register!\n", - "from fluidize.client import FluidizeClient\n", - "\n", - "# Create client and config\n", - "client = 
FluidizeClient(mode=\"local\")\n", - "\n", - "print(f\"\ud83d\udcc1 Projects will be stored in: {client.config.local_projects_path}\")\n", - "print(f\"\ud83d\udcc1 Base directory: {client.config.local_base_path}\")\n", - "print(f\"\ud83d\ude80 Client ready in '{client.mode}' mode!\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1. Creating Projects\n", - "\n", - "Let's create some projects with different configurations:" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\u2705 Created project 1:\n", - " ID: MUJOCO\n", - " Label: MUJOCO DEMO\n", - " Status: active\n" - ] - } - ], - "source": [ - "# Create a comprehensive project\n", - "project1 = client.projects.create(\n", - " project_id=\"MUJOCO\",\n", - " label=\"MUJOCO DEMO\",\n", - " description=\"A MuJoCo simulation project\",\n", - " status=\"active\",\n", - ")\n", - "\n", - "print(\"\u2705 Created project 1:\")\n", - "print(f\" ID: {project1.id}\")\n", - "print(f\" Label: {project1.label}\")\n", - "print(f\" Status: {project1.status}\")" - ] - }, - { - "cell_type": "code", - "execution_count": 20, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83d\udccb Found 2 projects:\n", - "\n", - " 1. data-pipeline-2024\n", - " Label: Data Processing Pipeline\n", - " Status: active\n", - " Description: A comprehensive data processing pipeline for custo...\n", - "\n", - " 2. MUJOCO\n", - " Label: MUJOCO DEMO\n", - " Status: active\n", - " Description: A MuJoCo simulation project\n", - "\n" - ] - } - ], - "source": [ - "# Get all projects\n", - "projects = client.projects.list()\n", - "\n", - "print(f\"\ud83d\udccb Found {len(projects)} projects:\")\n", - "print()\n", - "\n", - "for i, project in enumerate(projects, 1):\n", - " print(f\"{i:2}. 
{project.id}\")\n", - " print(f\" Label: {project.label}\")\n", - " print(f\" Status: {project.status}\")\n", - " if project.description:\n", - " print(f\" Description: {project.description[:50]}{'...' if len(project.description) > 50 else ''}\")\n", - " print()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2. Creating Nodes" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83d\udce6 Imported all node creation types successfully!\n" - ] - } - ], - "source": [ - "# Import required types for creating nodes from scratch\n", - "import datetime\n", - "\n", - "from fluidize.core.types.graph import GraphNode, Position, graphNodeData\n", - "from fluidize.core.types.node import nodeMetadata_simulation, nodeProperties_simulation\n", - "\n", - "print(\"\ud83d\udce6 Imported all node creation types successfully!\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Creating Nodes in Two Ways\n", - "\n", - "Fluidize supports two approaches for creating nodes in your project graph:\n", - "\n", - "1. **`add_node()`** - For nodes that use existing simulation templates\n", - "2. 
**`add_node_from_scratch()`** - Complete node creation with all files and directories\n", - "\n", - "Let's demonstrate the comprehensive `add_node_from_scratch` approach that creates:\n", - "- Graph node entry in `graph.json`\n", - "- Complete `properties.yaml` with container configuration\n", - "- Rich `metadata.yaml` with authors, tags, and references\n", - "- Source directory (optionally cloned from a repository)" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83c\udfaf Working with project: MUJOCO DEMO\n", - "\ud83d\udcca Current graph state: 1 nodes, 0 edges\n" - ] - } - ], - "source": [ - "# Get our MUJOCO project for node creation\n", - "project = client.projects.get(\"MUJOCO\")\n", - "\n", - "print(f\"\ud83c\udfaf Working with project: {project.label}\")\n", - "print(f\"\ud83d\udcca Current graph state: {len(project.graph.get().nodes)} nodes, {len(project.graph.get().edges)} edges\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Complete Node Creation Example\n", - "\n", - "Let's create a comprehensive MuJoCo simulation node with all metadata:" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\u2705 Created GraphNode:\n", - " ID: Mujoco-Simulation\n", - " Label: MuJoCo Humanoid Simulation\n", - " Position: (150.0, 100.0)\n" - ] - } - ], - "source": [ - "# 1. 
Create the GraphNode (defines position and basic info in the graph)\n", - "mujoco_graph_node = GraphNode(\n", - " id=\"Mujoco-Simulation\",\n", - " position=Position(x=150.0, y=100.0),\n", - " data=graphNodeData(label=\"MuJoCo Humanoid Simulation\"),\n", - " type=\"physics-simulation\",\n", - ")\n", - "\n", - "print(\"\u2705 Created GraphNode:\")\n", - "print(f\" ID: {mujoco_graph_node.id}\")\n", - "print(f\" Label: {mujoco_graph_node.data.label}\")\n", - "print(f\" Position: ({mujoco_graph_node.position.x}, {mujoco_graph_node.position.y})\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\u2705 Created nodeProperties:\n", - " Container: nvidia/cuda:11.8-devel-ubuntu20.04\n", - " Mount path: source\n", - " Output folder: source/outputs\n", - " Should run: True\n" - ] - } - ], - "source": [ - "# 2. Create nodeProperties (container and execution configuration)\n", - "mujoco_properties = nodeProperties_simulation(\n", - " # Required fields\n", - " container_image=\"\", # Docker image with CUDA support\n", - " simulation_mount_path=\"source\", # Mount path inside container\n", - " source_output_folder=\"source/outputs\", # Where outputs are stored\n", - ")\n", - "\n", - "mujoco_metadata = nodeMetadata_simulation(\n", - " name=\"MuJoCo Humanoid Locomotion Demo\",\n", - " id=\"mujoco-humanoid-demo\",\n", - " version=\"0.1\",\n", - " description=\"A demo simulation of a humanoid in MuJoCo\",\n", - " date=datetime.date.today(),\n", - " authors=[],\n", - " tags=[],\n", - ")\n", - "\n", - "print(\"\u2705 Created nodeProperties:\")\n", - "print(f\" Container: {mujoco_properties.container_image}\")\n", - "print(f\" Mount path: {mujoco_properties.simulation_mount_path}\")\n", - "print(f\" Output folder: {mujoco_properties.source_output_folder}\")\n", - "print(f\" Should run: {mujoco_properties.should_run}\")" - ] - }, - { - "cell_type": "code", - 
"execution_count": 14, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83d\ude80 Creating node from scratch...\n", - "Warning: Required field 'authors' is missing or empty in nodeMetadata\n", - "Info: Optional field 'image_name' not provided in nodeProperties\n", - "Info: Optional field 'last_run' not provided in nodeProperties\n", - "Info: Optional field 'code_url' not provided in nodeMetadata\n", - "Info: Optional field 'paper_url' not provided in nodeMetadata\n", - "Info: Optional field 'tags' not provided in nodeMetadata\n", - "\u2705 Node created successfully!\n", - " Node ID: Mujoco-Simulation\n", - " Label: MuJoCo Humanoid Simulation\n", - " Type: physics-simulation\n", - " Simulation ID: None\n" - ] - } - ], - "source": [ - "# 4. Create the complete node from scratch!\n", - "print(\"\ud83d\ude80 Creating node from scratch...\")\n", - "\n", - "created_node = project.graph.add_node_from_scratch(\n", - " node=mujoco_graph_node,\n", - " node_properties=mujoco_properties,\n", - " node_metadata=mujoco_metadata,\n", - ")\n", - "\n", - "print(\"\u2705 Node created successfully!\")\n", - "print(f\" Node ID: {created_node.id}\")\n", - "print(f\" Label: {created_node.data.label}\")\n", - "print(f\" Type: {created_node.type}\")\n", - "print(f\" Simulation ID: {created_node.data.simulation_id}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3. 
Retrieving Specific Projects\n", - "\n", - "Get detailed information about a specific project:" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\ud83d\udcca Project Details:\n", - " ID: MUJOCO\n", - " Label: MUJOCO DEMO\n", - " Description: A MuJoCo simulation project\n", - " Status: active\n", - " Location: \n", - " Metadata Version: 1.0\n" - ] - } - ], - "source": [ - "# Get project details\n", - "project = client.projects.get(\"MUJOCO\")\n", - "\n", - "print(\"\ud83d\udcca Project Details:\")\n", - "print(f\" ID: {project.id}\")\n", - "print(f\" Label: {project.label}\")\n", - "print(f\" Description: {project.description}\")\n", - "print(f\" Status: {project.status}\")\n", - "print(f\" Location: {project.location}\")\n", - "print(f\" Metadata Version: {project.metadata_version}\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "No start node provided, using first node: Mujoco-Simulation\n", - "BFS traversal starting from node 'Mujoco-Simulation':\n", - " - Adding node to traversal: Mujoco-Simulation, previous node: None\n", - "Nodes to run: ['Mujoco-Simulation']\n", - "Created project run folder: /Users/henrybae/.fluidize/projects/MUJOCO/runs/run_10\n", - "Created run environment with number: 10\n" - ] - }, - { - "data": { - "text/plain": [ - "{'flow_status': 'running', 'run_number': 10}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "No parameters.json found for node Mujoco-Simulation\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Executing node Mujoco-Simulation in run 10\n", - "\n", - "=== Starting run for node: Mujoco-Simulation ===\n", - "1. 
Preparing environment...\n", - "\ud83d\udd0d [Environment] DEBUG: Attempting to load parameters from /Users/henrybae/.fluidize/projects/MUJOCO/runs/run_10/Mujoco-Simulation\n", - "\ud83d\udd0d [Environment] DEBUG: Loaded param_data = {'metadata': {'description': 'Parameter tuning for pinata motor strength', 'version': '1.0'}, 'parameters': [{'name': 'motor_strength', 'value': '20.0', 'type': 'text', 'label': 'Motor Strength', 'description': 'Control signal strength for bat motor (higher = faster swing, more collision force)', 'scope': 'simulation', 'location': ['source/pinata_simulation.py']}]}\n", - "\ud83d\udd0d [Environment] DEBUG: all_params = [{'name': 'motor_strength', 'value': '20.0', 'type': 'text', 'label': 'Motor Strength', 'description': 'Control signal strength for bat motor (higher = faster swing, more collision force)', 'scope': 'simulation', 'location': ['source/pinata_simulation.py']}]\n", - "\ud83d\udd0d [Environment] DEBUG: Found 1 simulation params, 0 properties params\n", - "\ud83d\udd0d [Environment] DEBUG: simulation_params = [{'name': 'motor_strength', 'value': '20.0', 'type': 'text', 'label': 'Motor Strength', 'description': 'Control signal strength for bat motor (higher = faster swing, more collision force)', 'scope': 'simulation', 'location': ['source/pinata_simulation.py']}]\n", - "\ud83d\udd0d [Environment] DEBUG: properties_params = []\n", - "\ud83d\udd0d [Environment] DEBUG: context = {'motor_strength': '20.0'}\n", - "\ud83d\udd0d [Environment] DEBUG: param_locations = {'motor_strength': ['source/pinata_simulation.py']}\n", - "\ud83d\udd0d [Environment] DEBUG: node_run_folder = /Users/henrybae/.fluidize/projects/MUJOCO/runs/run_10/Mujoco-Simulation\n", - "\ud83d\udd0d [Environment] DEBUG: checking location_path = /Users/henrybae/.fluidize/projects/MUJOCO/runs/run_10/Mujoco-Simulation/source/pinata_simulation.py, exists = True\n", - "\ud83d\udd27 [Environment] Processing 1 targeted files (vs exhaustive search)\n", - "\ud83d\udd0d 
[Environment] DEBUG: files_to_process = [PosixUPath('/Users/henrybae/.fluidize/projects/MUJOCO/runs/run_10/Mujoco-Simulation/source/pinata_simulation.py')]\n", - "\ud83d\udcdd [Environment] Updated parameters in: source/pinata_simulation.py\n", - "2. Executing simulation...\n", - "3. Handling files...\n", - "=== Run completed for node: Mujoco-Simulation with result: True ===\n", - "\n" - ] - } - ], - "source": [ - "from fluidize.core.types.runs import RunFlowPayload\n", - "\n", - "payload = RunFlowPayload(\n", - " name=\"simulation-run-1\", description=\"Running simulation flow\", tags=[\"simulation\", \"analysis\"]\n", - ")\n", - "\n", - "\n", - "project.runs.run_flow(payload)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 4. Updating Projects\n", - "\n", - "Modify existing projects:" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "fluidize-python", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.18" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/utils/start_jupyter.sh b/utils/start_jupyter.sh deleted file mode 100755 index c85ab0a..0000000 --- a/utils/start_jupyter.sh +++ /dev/null @@ -1,53 +0,0 @@ -#!/bin/bash -# Fluidize-Python Jupyter Notebook Launcher - -set -e # Exit on error - -echo "🚀 Starting Fluidize-Python Jupyter Notebook" -echo "=============================================" - -# Change to project root directory (parent of utils) -cd "$(dirname "$0")/.." - -# Ensure uv environment is set up -echo "📦 Setting up uv environment..." -if ! command -v uv &> /dev/null; then - echo "❌ uv not found. Please install uv first: https://github.com/astral-sh/uv" - exit 1 -fi - -# Check if environment exists and is up to date -if [ ! 
-d ".venv" ] || [ "pyproject.toml" -nt ".venv/pyvenv.cfg" ]; then - echo "📦 Setting up/updating uv environment..." - uv sync - echo "📦 Installing package in development mode..." - uv run pip install -e . -else - echo "📦 Using existing uv environment (up to date)" -fi - -# Check if jupyter is installed in the uv environment -echo "📚 Ensuring Jupyter is available..." -if ! uv run jupyter --version &> /dev/null; then - echo "📚 Adding Jupyter to uv environment..." - uv add --dev jupyter -else - echo "📚 Jupyter already available" -fi - -# Show environment info -echo "🐍 Python: $(which python)" -echo "📁 Projects directory: ~/.fluidize/projects/" -echo "📓 Notebook: utils/fluidize_demo.ipynb" -echo "📂 Current directory: $(pwd)" -echo "" - -# Start Jupyter notebook from project root -echo "🌟 Starting Jupyter Notebook..." -echo " The notebook will open in your browser automatically" -echo " Press Ctrl+C to stop the server" -echo "" - -# Start Jupyter from the project root so imports work correctly -# The notebook will be available at utils/fluidize_demo.ipynb -uv run jupyter notebook --notebook-dir=. 
utils/fluidize_demo.ipynb diff --git a/uv.lock b/uv.lock index b5e98e6..e1f8dcf 100644 --- a/uv.lock +++ b/uv.lock @@ -759,7 +759,7 @@ dev = [ requires-dist = [ { name = "asciitree", specifier = ">=0.3.3" }, { name = "docker", specifier = ">=7.1.0" }, - { name = "fluidize-sdk", specifier = ">=0.4.0" }, + { name = "fluidize-sdk", specifier = ">=0.6.0" }, { name = "jinja2", specifier = ">=3.1.6" }, { name = "mlflow", specifier = ">=3.1.4" }, { name = "networkx", specifier = ">=3.2.1" }, @@ -786,7 +786,7 @@ dev = [ [[package]] name = "fluidize-sdk" -version = "0.4.0" +version = "0.6.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio" }, @@ -796,9 +796,9 @@ dependencies = [ { name = "sniffio" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/90/ef/167f581b140afec073ee4961b18fe347c090477b3263a5276656539cf9af/fluidize_sdk-0.4.0.tar.gz", hash = "sha256:0b80ff0a561fadf01cf7da6a1c0b425bfec287af9d31a8596d6e3cd15e476381", size = 118050, upload-time = "2025-08-10T10:16:28.617Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f0/6d/41a511df1e9f2cfb6526a349162ea7032ef2ef3d4ced52b8ff81a036ca84/fluidize_sdk-0.6.0.tar.gz", hash = "sha256:cb4548e3ebac5c949b177a6f352edc91be5ffd82900dd0267159027ffb895633", size = 119144, upload-time = "2025-08-16T09:45:34.48Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/cf/f4/8d1b1f630058a951bd5702f29069ea6f1bf1396d3649663a3752f7f4b699/fluidize_sdk-0.4.0-py3-none-any.whl", hash = "sha256:501e7093f998cecc171e95e57d4f4e9d207820859f13bdb8a0a45e1c72d9f21b", size = 120117, upload-time = "2025-08-10T10:16:27.245Z" }, + { url = "https://files.pythonhosted.org/packages/e9/6e/25a81bd993b6dec0733e7f4f13f199a40e3500222907bfab54fc5e946eea/fluidize_sdk-0.6.0-py3-none-any.whl", hash = "sha256:22db9101b88bbf0adea113057f96ce9436985ce3b9e635fedcdb421fff399ab2", size = 121524, upload-time = "2025-08-16T09:45:32.874Z" }, ] [[package]]
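
Review note: the renames above (`Projects` → `RegistryManager`, `Project` → `ProjectManager`) preserve the same adapter-delegation shape the tests assert: the registry-level manager forwards CRUD calls to an injected adapter and wraps the returned records in per-project managers. A minimal, self-contained sketch of that pattern — the class names mirror the diff, but these are stand-ins, not the real `fluidize.managers` API:

```python
from unittest.mock import Mock


class ProjectManager:
    """Stand-in wrapper exposing a single project's fields (sketch only)."""

    def __init__(self, adapter, project):
        self._adapter = adapter
        self.id = project.id


class RegistryManager:
    """Stand-in registry that delegates lookups to an adapter and wraps
    each result in a ProjectManager, as the renamed tests above assert."""

    def __init__(self, adapter):
        self.adapter = adapter

    def get(self, project_id):
        # Delegate retrieval to the adapter, then wrap the raw record.
        project = self.adapter.projects.retrieve(project_id)
        return ProjectManager(self.adapter, project)


# Exercise the pattern the same way the unit tests do: a Mock adapter
# records the delegated call, and the wrapper exposes the record's id.
adapter = Mock()
adapter.projects.retrieve.return_value = Mock(id="demo-project")

manager = RegistryManager(adapter)
result = manager.get("demo-project")

assert isinstance(result, ProjectManager)
assert result.id == "demo-project"
adapter.projects.retrieve.assert_called_once_with("demo-project")
```

The design choice the tests pin down is that managers hold no state of their own beyond the adapter reference, which is why every test can substitute a `Mock()` adapter and verify behavior purely through call delegation.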