End-to-end Edge AI inference validation pipeline (Forge → Runtime → Lab → AIGuard)
C++ runtime · provenance · deployment decisions
Pinned repositories:
- gwonxhj/InferEdge — Multi-repository entrypoint for the InferEdge local-first Edge AI inference validation pipeline. (Shell)
- gwonxhj/InferEdgeForge — Build provenance and handoff layer for ONNX-to-edge artifacts in the InferEdge validation pipeline. (Python)
- gwonxhj/InferEdge-Runtime — C++ runtime execution and Jetson/ONNX Runtime evidence export layer for Edge AI inference validation. (C++)
- gwonxhj/InferEdgeLab — Analysis, Local Studio, and deployment decision layer for local-first Edge AI inference validation.
- gwonxhj/InferEdgeAIGuard — Optional deterministic diagnosis evidence layer for explaining risky Edge AI inference outputs. (Python)
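The repositories above describe a linear handoff: Forge attaches build provenance, Runtime exports inference evidence, Lab makes the deployment decision, and AIGuard optionally adds diagnosis for risky outputs. As a minimal sketch of that stage ordering — every function and field name here is hypothetical, not the real InferEdge API — the handoff can be modeled as a single evidence record passed through the four layers:

```python
# Hypothetical sketch of the Forge -> Runtime -> Lab -> AIGuard handoff.
# All names and fields are illustrative, not the actual InferEdge interfaces.

def forge(model_path: str) -> dict:
    """Attach build provenance to an artifact handoff record."""
    return {"artifact": model_path, "provenance": {"source": "onnx"}}

def runtime(record: dict) -> dict:
    """Execute inference and export runtime evidence."""
    record["evidence"] = {"latency_ms": 4.2, "outputs_ok": True}
    return record

def lab(record: dict) -> dict:
    """Analyze the evidence and record a deployment decision."""
    record["decision"] = "deploy" if record["evidence"]["outputs_ok"] else "hold"
    return record

def aiguard(record: dict) -> dict:
    """Optionally attach deterministic diagnosis notes for risky outputs."""
    if record["decision"] != "deploy":
        record["diagnosis"] = "inspect runtime evidence"
    return record

result = aiguard(lab(runtime(forge("model.onnx"))))
```

The point of the single mutable record is that each layer only appends its own fields, so downstream tools (and a reviewer) can see which stage produced each piece of evidence.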
