FusedSCEquiTensorPot is an E(3)-equivariant neural potential for atomistic modeling with multiple tensor-product backends, explicit external-field conditioning, physical-tensor supervision, multi-fidelity training, and direct LAMMPS deployment.
- Backends: `spherical`, channelwise spherical (`spherical-save-cue`), partial Cartesian, sparse Cartesian, ICTD, and strict-parity `pure-cartesian-ictd-o3`.
- Field-aware learning: electric field (`1o`), magnetic field (`1e`), and rank-aware tensor inputs can be embedded into equivariant message passing.
- Physical tensor targets: charge, dipole, polarizability, quadrupole, BEC, and magnetic moment.
- Multi-fidelity: graph-level fidelity conditioning, delta-learning (`delta-baseline`), per-fidelity weighting, and per-fidelity metrics.
- Deployment: export to `core.pt`, run through `USER-MFFTORCH` or ML-IAP, with runtime field / fidelity control in LAMMPS.
- Multiple Cartesian and spherical equivariant trunks under one training/evaluation stack.
- External-field-aware tensor learning, including explicit parity-sensitive O(3) modeling.
- End-to-end workflow from preprocessing and training to `core.pt` export and LAMMPS runtime.
- Research-oriented extensions such as long-range prototypes, active learning, NEB, phonons, and thermal transport.
Dataset notes and conversion examples (rMD17, ANI-1x, QM7-X, SPICE, generic HDF5) are in USAGE.md (Chinese) and USAGE_EN.md (English).
```bash
pip install -e .
```

For a reproducible Linux CUDA setup with pinned PyTorch/cuEquivariance/PyG wheels:

```bash
bash scripts/install_pt271_cu128.sh
pip install -e .
```

Optional extras:

- `pip install -e ".[cue]"` for `spherical-save-cue`
- `pip install -e ".[pyg]"` for faster PyG scatter / neighbor ops
- `pip install -e ".[al]"` for SOAP-based active learning diversity
- `pip install -e ".[thermal]"` for thermal transport (`phono3py`, `scipy`)
```bash
mff-preprocess --input-file data.xyz --output-dir data
```

To skip neighbor-list preprocessing (for a quick sanity check):

```bash
mff-preprocess --input-file data.xyz --output-dir data --skip-h5
```

If your extxyz uses custom field names, you can override them explicitly:

```bash
mff-preprocess \
    --input-file custom.extxyz \
    --output-dir data \
    --energy-key REF_energy \
    --force-key REF_force \
    --species-key elem \
    --coord-key coords \
    --atomic-number-key atomic_number
```

Minimal training:

```bash
mff --train --data-dir data --epochs 1000 --batch-size 8 --device cuda
```

Recommended backbone examples:
```bash
# Memory-efficient ICTD
mff --train --data-dir data --device cuda --tensor-product-mode pure-cartesian-ictd

# Full-parity O(3) ICTD
mff --train --data-dir data --device cuda --tensor-product-mode pure-cartesian-ictd-o3

# Sparse Cartesian
mff --train --data-dir data --device cuda --tensor-product-mode pure-cartesian-sparse
```

Field-aware training examples:
```bash
# Electric field + dipole/polarizability
mff --train --data-dir data --tensor-product-mode pure-cartesian-sparse \
    --external-tensor-rank 1 --external-field-file data/efield.npy \
    --physical-tensors dipole,polarizability \
    --dipole-file data/dipole.npy --polarizability-file data/pol.npy \
    --physical-tensor-weights "dipole:2.0,polarizability:1.0"
```
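The `.npy` inputs referenced above are plain NumPy arrays. A minimal sketch of generating placeholder files, assuming one 3-vector field and one dipole per structure plus a symmetric 3x3 polarizability per structure (the exact shapes the CLI expects are documented in USAGE.md; the values here are synthetic):

```python
from pathlib import Path

import numpy as np

out = Path("data")
out.mkdir(exist_ok=True)
n_structures = 100  # assumed dataset size

# Rank-1 external field: one 3-vector per structure (assumed layout;
# check USAGE.md for the exact shape the CLI expects).
efield = np.zeros((n_structures, 3))
efield[:, 2] = 0.01  # uniform field along z

# Supervision targets (assumed layouts): a 3-vector dipole and a
# symmetric 3x3 polarizability per structure. Synthetic values only.
rng = np.random.default_rng(0)
dipole = rng.normal(size=(n_structures, 3))
pol = rng.normal(size=(n_structures, 3, 3))
pol = 0.5 * (pol + pol.transpose(0, 2, 1))  # symmetrize

np.save(out / "efield.npy", efield)
np.save(out / "dipole.npy", dipole)
np.save(out / "pol.npy", pol)
```

Replace the synthetic arrays with your reference data before training.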
```bash
# Magnetic field (1e) + magnetic moment
mff --train --data-dir data --tensor-product-mode pure-cartesian-ictd-o3 \
    --external-tensor-rank 1 --external-tensor-irrep 1e \
    --o3-irrep-preset auto \
    --o3-active-irreps '0e,1e,2e' \
    --external-field-file data/bfield.npy \
    --physical-tensors magnetic_moment \
    --magnetic-moment-file data/magnetic_moment.npy
```
```bash
# Simultaneous electric field (1o) + magnetic field (1e)
mff --train --data-dir data --tensor-product-mode pure-cartesian-ictd-o3 \
    --external-tensor-rank 1 --external-tensor-irrep 1o \
    --external-field-file data/efield.npy \
    --magnetic-field-file data/bfield.npy \
    --o3-irrep-preset auto \
    --o3-active-irreps '0e,1e,1o,2e'
```
Optional ZBL short-range repulsion:

```bash
mff --train --data-dir data --tensor-product-mode pure-cartesian-ictd \
    --zbl-enabled \
    --zbl-inner-cutoff 0.6 \
    --zbl-outer-cutoff 1.2 \
    --zbl-exponent 0.23 \
    --zbl-energy-scale 1.0
```

Supported modes:

- `spherical-save-cue`
- `pure-cartesian-ictd`
- `pure-cartesian-ictd-o3`
- `pure-cartesian-sparse`
- `pure-cartesian-sparse-save`
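The `--zbl-*` flags above follow the standard Ziegler-Biersack-Littmark repulsion, whose universal screening length uses the same 0.23 exponent as `--zbl-exponent`. A schematic NumPy version of the bare ZBL pair energy (the library's cutoff smoothing between the inner and outer cutoffs is not reproduced here):

```python
import numpy as np

# Universal screening function coefficients (Ziegler-Biersack-Littmark).
C = np.array([0.18175, 0.50986, 0.28022, 0.02817])
D = np.array([3.19980, 0.94229, 0.40290, 0.20162])

def zbl_phi(x):
    """Universal screening function phi(x) = sum_i c_i * exp(-d_i * x)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return (C * np.exp(-np.outer(x, D))).sum(axis=-1)

def zbl_energy(r, z1, z2, exponent=0.23):
    """Screened Coulomb repulsion in eV for separations r in Å."""
    ke = 14.399645  # e^2 / (4 pi eps0) in eV*Å
    a = 0.46850 / (z1**exponent + z2**exponent)  # screening length in Å
    return ke * z1 * z2 / np.asarray(r, dtype=float) * zbl_phi(r / a)
```

At zero separation the screening function approaches 1 (pure Coulomb), and the energy decays monotonically with distance, which is what makes ZBL a safe short-range guard for a learned potential.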
Conditioning-only multi-fidelity:
```bash
mff --train \
    --data-dir data \
    --tensor-product-mode pure-cartesian-ictd \
    --num-fidelity-levels 2 \
    --fidelity-id-file data/train_fidelity_id.npy \
    --fidelity-loss-weights '0:1.0,1:3.0'
```

Delta-learning multi-fidelity:
```bash
mff --train \
    --data-dir data \
    --tensor-product-mode pure-cartesian-ictd-o3 \
    --num-fidelity-levels 2 \
    --multi-fidelity-mode delta-baseline \
    --fidelity-id-file data/train_fidelity_id.npy \
    --fidelity-loss-weights '0:1.0,1:3.0' \
    --delta-regularization-weight 1e-4
```

Merge multiple processed HDF5 files into one multi-fidelity dataset:
```bash
mff --merge-multifidelity \
    --inputs data/processed_pbe.h5 data/processed_hse.h5 \
    --fidelity-ids 0 1 \
    --output-h5 data/processed_train_mf.h5 \
    --output-fidelity-npy data/train_fidelity_id.npy
```

Train with LES-style long-range (mesh_fft, recommended first-stage settings):
```bash
# 3D periodic reciprocal long-range
mff-train --data-dir data --tensor-product-mode pure-cartesian-ictd \
    --long-range-mode reciprocal-spectral-v1 \
    --long-range-reciprocal-backend mesh_fft \
    --long-range-boundary periodic \
    --long-range-mesh-size 16 \
    --long-range-green-mode poisson \
    --long-range-energy-partition potential \
    --long-range-assignment cic

# Slab reciprocal long-range: x/y periodic + z vacuum padding
mff-train --data-dir data --tensor-product-mode pure-cartesian-ictd \
    --long-range-mode reciprocal-spectral-v1 \
    --long-range-reciprocal-backend mesh_fft \
    --long-range-boundary slab \
    --long-range-mesh-size 16 \
    --long-range-slab-padding-factor 2 \
    --long-range-green-mode poisson \
    --long-range-energy-partition potential \
    --long-range-assignment cic
```

Notes:

- Supported training architectures: `pure-cartesian-ictd`, `spherical-save-cue`
- Recommended first use: keep `--long-range-green-mode poisson`
- ASE active learning now supports the same `periodic`/`slab` boundary semantics for the Python calculator path
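The `cic` assignment plus `poisson` Green's function above is the textbook particle-mesh recipe: spread charges onto a periodic grid with cloud-in-cell weights, then solve Poisson's equation in reciprocal space. A toy NumPy sketch of that scheme (illustrative only, units and normalization omitted; not the library's implementation):

```python
import numpy as np

def cic_assign(frac, q, n):
    """Cloud-in-cell: spread each charge onto its 8 nearest mesh points.

    frac: (n_atoms, 3) fractional coordinates in [0, 1); q: (n_atoms,) charges.
    """
    rho = np.zeros((n, n, n))
    g = frac * n                       # fractional coords -> mesh units
    i0 = np.floor(g).astype(int)
    f = g - i0                         # distance to the lower mesh point
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (q
                     * (f[:, 0] if dx else 1 - f[:, 0])
                     * (f[:, 1] if dy else 1 - f[:, 1])
                     * (f[:, 2] if dz else 1 - f[:, 2]))
                np.add.at(rho, ((i0[:, 0] + dx) % n,
                                (i0[:, 1] + dy) % n,
                                (i0[:, 2] + dz) % n), w)
    return rho

def poisson_green(rho, box):
    """FFT Poisson solve: multiply rho(k) by the 1/k^2 Green's function.

    The k = 0 (mean-charge) mode is dropped, as in standard Ewald-like schemes.
    """
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    phi_k = np.zeros_like(rho_k)
    mask = k2 > 0
    phi_k[mask] = rho_k[mask] / k2[mask]
    return np.fft.ifftn(phi_k).real
```

CIC conserves total charge on the mesh by construction, and dropping the k = 0 mode makes the resulting potential zero-mean.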
By default, the dynamic loss weights a/b are clamped to [1, 1000] (they change during training). You can override the range:

```bash
mff-train --data-dir data --a 10.0 --b 100.0 --update-param 750 --weight-a-growth 1.05 --weight-b-decay 0.98 --a-max 1000 --b-min 1 --b-max 1000
```

Optional: override baseline atomic energies (E0):

```bash
# from CSV (Atom,E0)
mff-train --data-dir data --atomic-energy-file data/fitted_E0.csv

# or directly from the CLI
mff-train --data-dir data --atomic-energy-keys 1 6 7 8 --atomic-energy-values -430.53 -821.03 -1488.19 -2044.35
```

Evaluate a trained model. The recommended default is to let `mff-evaluate` restore model-structure hyperparameters and `tensor_product_mode` from the checkpoint automatically:

```bash
mff-evaluate --checkpoint combined_model.pth --test-prefix test --output-prefix test --use-h5
```

If you explicitly pass conflicting structure arguments such as `--tensor-product-mode`, `--embedding-dim`, `--output-size`, or `--invariant-channels`, the CLI takes precedence over the checkpoint. For new checkpoints, `mff-evaluate` can also restore `atomic_energy_keys`/`atomic_energy_values` directly from the checkpoint; older checkpoints still fall back to the local `fitted_E0.csv` behavior. Only pass those arguments when you intentionally want to override the checkpoint configuration.
Outputs include:

- `test_loss.csv`
- `test_energy.csv`
- `test_force.csv`

Optional: use `--compile e3trans` to accelerate evaluation with `torch.compile`.
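The RMSE values derived from these CSVs can be cross-checked with a few lines of NumPy. The column layout in the commented loader is hypothetical; inspect the actual CSV headers before loading real files:

```python
import numpy as np

def rmse(ref, pred):
    """Root-mean-square error between reference and predicted values."""
    ref = np.asarray(ref, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# For real runs, load the reference and predicted columns from
# test_energy.csv / test_force.csv (column indices are assumptions --
# check the actual headers first):
# ref, pred = np.loadtxt("test_energy.csv", delimiter=",",
#                        skiprows=1, usecols=(0, 1), unpack=True)
e_ref = np.array([1.0, 2.0, 3.0])
e_pred = np.array([1.1, 1.9, 3.2])
```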
For molecular dynamics simulation:

```bash
mff-evaluate --checkpoint combined_model.pth --md-sim
```

For NEB (Nudged Elastic Band) calculations:

```bash
mff-evaluate --checkpoint combined_model.pth --neb
```

For phonon spectra (Hessian, vibrational frequencies):

```bash
mff-evaluate --checkpoint combined_model.pth --phonon --phonon-input structure.xyz
```

Optional: stress training (PBC with stress/virial in the XYZ file):

```bash
mff-train --data-dir data -c 0.1 --input-file pbc_with_stress.xyz
```

Grow your training set automatically where the potential is under-sampled: one CLI runs the full train → explore → select → label (DFT) → merge loop. It works on a single machine (PySCF, VASP, …) or on HPC (SLURM, one job per structure).
```bash
# Local: PySCF, 8 parallel workers
mff-active-learn --explore-type ase --explore-mode md --label-type pyscf \
    --pyscf-method b3lyp --pyscf-basis 6-31g* \
    --label-n-workers 8 --md-steps 500 --n-iterations 5

# HPC: SLURM, one job per structure
mff-active-learn --explore-type ase --label-type slurm \
    --slurm-template dft_job.sh --slurm-partition cpu \
    --slurm-nodes 1 --slurm-ntasks 32 --slurm-time 04:00:00
```

📖 Full CLI & options: USAGE.md (Chinese) · USAGE_EN.md (English) · ACTIVE_LEARNING.md (backends, multi-stage, FAQ).
Long-range aware active learning is also supported through the ASE calculator path. In practice, use checkpoints trained with:

```bash
--long-range-mode reciprocal-spectral-v1 \
--long-range-reciprocal-backend mesh_fft \
--long-range-green-mode poisson \
--long-range-energy-partition potential \
--long-range-assignment cic
```

For slab systems, additionally set:

```bash
--long-range-boundary slab \
--long-range-slab-padding-factor 2
```

FusedSCEquiTensorPot supports three LAMMPS integration methods:
| Method | Speed | Requirements | Use Case |
|---|---|---|---|
| USER-MFFTORCH (LibTorch pure C++) | Fastest, no Python/GIL | LAMMPS built with KOKKOS + USER-MFFTORCH | HPC, clusters, production |
| ML-IAP unified | Faster (~1.7x vs fix external) | LAMMPS built with ML-IAP | Recommended, GPU support |
| fix external / pair_style python | Slower | Standard LAMMPS + Python | Quick validation, no ML-IAP |
USER-MFFTORCH loads TorchScript models directly through the LibTorch C++ API. No Python is needed at runtime, which makes it suitable for HPC and production deployment.
1. Export `core.pt` (one-time, requires Python):

   ```bash
   mff-export-core --checkpoint model.pth --elements H O --device cuda \
       --e0-csv fitted_E0.csv --out core.pt
   ```

   `mff-export-core` restores structure hyperparameters such as `tensor_product_mode`, `max_radius`, and `num_interaction` from the checkpoint by default, and it now embeds E0 by default as well. New checkpoints store `atomic_energy_keys`/`atomic_energy_values`, so checkpoint E0 is usually enough; if `--e0-csv` is passed explicitly, the CLI wins. Older checkpoints still fall back to a local `fitted_E0.csv`. Use `--no-embed-e0` only if you explicitly want to export the network energy without E0. If the checkpoint was trained with ZBL enabled, the exported `core.pt` includes the same short-range ZBL correction automatically.

2. Build LAMMPS: enable `PKG_KOKKOS` and `PKG_USER-MFFTORCH`. See lammps_user_mfftorch/docs/BUILD_AND_RUN.md.

3. Run (pure LAMMPS, no Python):

   ```bash
   lmp -k on g 1 -sf kk -pk kokkos newton off neigh full -in in.mfftorch
   ```
LAMMPS input example:

```
pair_style mff/torch 5.0 cuda
pair_coeff * * /path/to/core.pt H O
```
If core.pt came from a checkpoint with ZBL enabled, no extra LAMMPS keyword is needed: the ZBL short-range repulsion is already embedded in the exported TorchScript model.
For checkpoints exported with external-field architecture, USER-MFFTORCH supports runtime rank-1 external tensors and follows the exported irrep semantics:
```
variable Ex equal 0.0
variable Ey equal 0.0
variable Ez equal 0.01
pair_style mff/torch 5.0 cuda field v_Ex v_Ey v_Ez
pair_coeff * * /path/to/core.pt H O
```
For magnetic-field-style 1e checkpoints, use mfield instead of field:
```
variable Bx equal 0.0
variable By equal 0.0
variable Bz equal 0.01
pair_style mff/torch 5.0 cuda mfield v_Bx v_By v_Bz
pair_coeff * * /path/to/core.pt H O
```
The rank-1 variables are re-evaluated on each force call, so time-dependent equal-style variables are supported. field is the runtime keyword for 1o-style vectors such as electric field; mfield is the runtime keyword for 1e-style axial vectors such as magnetic field. Current limitation: runtime external tensors are implemented for rank-1 and rank-2.
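The `field` (1o) vs `mfield` (1e) split mirrors standard vector physics: under an improper orthogonal operation Q (det Q = -1), a polar vector such as the electric field maps to Qv, while an axial vector such as the magnetic field picks up an extra det(Q) factor. A small NumPy check of that convention:

```python
import numpy as np

def transform_polar(Q, v):
    """1o (polar) vector, e.g. electric field: v -> Q v."""
    return Q @ v

def transform_axial(Q, v):
    """1e (axial/pseudo) vector, e.g. magnetic field: v -> det(Q) Q v."""
    return np.linalg.det(Q) * (Q @ v)

# Mirror through the xy-plane: an improper operation with det(Q) = -1.
Q = np.diag([1.0, 1.0, -1.0])
E = np.array([0.0, 0.0, 0.01])  # field along z
```

Under the mirror, a z-directed electric field flips sign while a z-directed magnetic field is unchanged, which is exactly why the two need separate runtime keywords in a parity-aware model.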
When a core.pt is exported with simultaneous rank-1 electric and magnetic fields, provide both keywords in the same pair_style line:
```
variable Ex equal 0.0
variable Ey equal 0.0
variable Ez equal 0.01
variable Bx equal 0.0
variable By equal 0.0
variable Bz equal 0.02
pair_style mff/torch 5.0 cuda field v_Ex v_Ey v_Ez mfield v_Bx v_By v_Bz
pair_coeff * * /path/to/core.pt H O
```
For multi-fidelity core.pt, runtime fidelity is passed through pair_style mff/torch fidelity ...:
```
pair_style mff/torch 5.0 cuda fidelity 1
pair_coeff * * /path/to/core.pt H O
```

or with an equal-style variable:

```
variable fid equal 1
pair_style mff/torch 5.0 cuda fidelity v_fid
pair_coeff * * /path/to/core.pt H O
```
If core.pt was exported with --export-fidelity-id, the fidelity branch is frozen during export and you should not pass fidelity at runtime.
field6 / field9 remain mutually exclusive with field / mfield.
For rank-2 runtime external tensors, USER-MFFTORCH supports both:

- `field9`: full 3x3 tensor in row-major order `xx xy xz yx yy yz zx zy zz`
- `field6`: symmetric 3x3 shorthand in order `xx yy zz xy xz yz`
Example:
```
variable Txx equal 1.0
variable Txy equal 0.0
variable Txz equal 0.0
variable Tyx equal 0.0
variable Tyy equal 1.0
variable Tyz equal 0.0
variable Tzx equal 0.0
variable Tzy equal 0.0
variable Tzz equal 1.0
pair_style mff/torch 5.0 cuda field9 v_Txx v_Txy v_Txz v_Tyx v_Tyy v_Tyz v_Tzx v_Tzy v_Tzz
pair_coeff * * /path/to/core.pt H O
```
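The two orderings can be related with a small helper (the function name is illustrative, not part of the package):

```python
import numpy as np

def field6_to_field9(t6):
    """Expand the symmetric shorthand (xx yy zz xy xz yz) to the full
    row-major field9 order (xx xy xz yx yy yz zx zy zz)."""
    xx, yy, zz, xy, xz, yz = t6
    full = np.array([[xx, xy, xz],
                     [xy, yy, yz],
                     [xz, yz, zz]])
    return full.reshape(-1)  # row-major flatten matches field9 order
```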
Model support: pure-cartesian-ictd series and spherical-save-cue only.
Export ML-IAP format (requires LAMMPS built with ML-IAP):

```bash
python -m molecular_force_field.cli.export_mliap checkpoint.pth --elements H O \
    --atomic-energy-keys 1 8 --atomic-energy-values -13.6 -75.0 --output model-mliap.pt
```

Supported models: `spherical`, `spherical-save`, `spherical-save-cue`, `pure-cartesian-ictd`, `pure-cartesian-ictd-save`.

Notes:

- `spherical-save-cue` is automatically exported through the TorchScript path in `export_mliap`, even if `--torchscript` is not specified explicitly. This is now the default safe behavior because the plain Python pickle path is not stable for this mode.
- `pure-cartesian` and `pure-cartesian-sparse` are still not supported by `export_mliap`.
- `export_mliap` also restores structure hyperparameters from the checkpoint by default. If conflicting CLI values are passed explicitly, the CLI wins.
- For new checkpoints, `export_mliap` can also restore `atomic_energy_keys`/`atomic_energy_values` directly from the checkpoint. Older checkpoints still fall back to a local `fitted_E0.csv`.
- If the checkpoint was trained with ZBL enabled, the exported `model-mliap.pt` carries the same ZBL correction automatically.
For crystalline systems, the recommended thermal-conductivity route is:

1. MLFF -> IFC2/IFC3
2. IFC2/IFC3 -> intrinsic lattice thermal conductivity via `phono3py`
3. intrinsic BTE -> engineering scattering / fast generalization via a Callaway-style post-process

This workflow is intentionally separate from `mff-evaluate --phonon`. The phonon mode is useful for Hessian and stability checks, while the thermal workflow is meant for actual transport calculations.

Install thermal deps: `pip install -e ".[thermal]"`
Minimal intrinsic BTE example:

```bash
python -m molecular_force_field.cli.thermal_transport bte \
    --checkpoint best_model.pth \
    --structure relaxed.cif \
    --supercell 4 4 4 \
    --phonon-supercell 4 4 4 \
    --mesh 16 16 16 \
    --temperatures 300 400 500 600 700 \
    --output-dir thermal_bte \
    --device cuda \
    --atomic-energy-file fitted_E0.csv
```

Minimal Callaway post-process example:

```bash
python -m molecular_force_field.cli.thermal_transport callaway \
    --kappa-hdf5 thermal_bte/kappa-m161616.hdf5 \
    --output-prefix thermal_bte/callaway \
    --component xx \
    --grain-size-nm 200 \
    --point-defect-coeff 1.0e-4
```

Outputs include `fc2.hdf5`, `fc3.hdf5`, `kappa-*.hdf5`, and Callaway CSV/JSON summaries.
For the detailed workflow, fitting strategy, and engineering notes, see THERMAL_TRANSPORT.md.
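A Callaway-style post-process typically augments the intrinsic phonon scattering rate with boundary and point-defect channels through Matthiessen's rule. A schematic NumPy version with parameter names echoing the CLI flags above (units must be made consistent in any real use; this is not the package's implementation):

```python
import numpy as np

def combined_rate(tau_intrinsic, v_group, omega,
                  grain_size_nm=200.0, point_defect_coeff=1e-4):
    """Matthiessen's rule: 1/tau = 1/tau_ph + v/L + A * omega^4.

    tau_intrinsic: intrinsic (BTE) relaxation time, v_group: group velocity,
    omega: phonon frequency. The grain-size term models boundary scattering,
    the omega^4 term Rayleigh-like point-defect scattering. Units here are
    illustrative only."""
    L = grain_size_nm * 1e-9  # grain size in m
    rate = (1.0 / np.asarray(tau_intrinsic, dtype=float)
            + np.asarray(v_group, dtype=float) / L
            + point_defect_coeff * np.asarray(omega, dtype=float) ** 4)
    return 1.0 / rate
```

Because rates add, every extra scattering channel can only shorten the effective relaxation time, and hence reduce the predicted thermal conductivity relative to the intrinsic BTE result.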
See LAMMPS_INTERFACE.md for full documentation.
```
rebuild/
├── molecular_force_field/              # Main package
│   ├── models/                         # Model definitions (eight tensor product modes)
│   │   ├── e3nn_layers.py              # Spherical mode (e3nn-based)
│   │   ├── e3nn_layers_channelwise.py  # spherical-save
│   │   ├── cartesian_e3_layers.py      # partial-cartesian, partial-cartesian-loose
│   │   ├── pure_cartesian*.py          # pure-cartesian, pure-cartesian-sparse
│   │   ├── pure_cartesian_ictd*.py     # pure-cartesian-ictd
│   │   ├── cue_layers*.py              # spherical-save-cue (cuEquivariance)
│   │   ├── mlp.py, losses.py
│   │   └── ...
│   ├── data/                           # Dataset and preprocessing
│   │   ├── datasets.py, preprocessing.py, collate.py
│   │   └── ...
│   ├── utils/                          # Configuration, graph utilities
│   │   ├── config.py, graph_utils.py, scatter.py, checkpoint_metadata.py
│   │   └── ...
│   ├── training/                       # Trainer
│   │   ├── trainer.py, schedulers.py
│   │   └── ...
│   ├── evaluation/                     # Evaluator, ASE Calculator
│   │   ├── evaluator.py, calculator.py
│   │   └── ...
│   ├── active_learning/                # Active learning loop
│   │   ├── loop.py                     # Main AL loop (train → explore → select → label → merge)
│   │   ├── train_ensemble.py           # Multi-model parallel training (DDP, cross-node)
│   │   ├── labeling.py                 # DFT labelers (PySCF, VASP, script, SLURM, ...)
│   │   ├── diversity_selector.py       # SOAP / devi_hist + FPS
│   │   ├── exploration.py, model_devi.py, data_merge.py, stage_scheduler.py
│   │   ├── init_data.py                # Cold-start perturbation
│   │   └── ...
│   ├── thermal/                        # Thermal transport (IFC2/IFC3, BTE, Callaway)
│   │   ├── model_loader.py, callaway.py
│   │   └── ...
│   ├── interfaces/                     # LAMMPS potential, ML-IAP
│   │   ├── lammps_potential.py         # fix external / pair_style python
│   │   └── lammps_mliap.py             # ML-IAP unified
│   └── cli/                            # Command-line interfaces
│       ├── train.py                    # mff-train (supports --n-gpu, --nnodes)
│       ├── preprocess.py               # mff-preprocess
│       ├── evaluate.py                 # mff-evaluate (static/MD/NEB/phonon)
│       ├── active_learning.py          # mff-active-learn
│       ├── init_data.py                # mff-init-data (cold-start)
│       ├── lammps_interface.py         # mff-lammps (fix external)
│       ├── export_libtorch_core.py     # mff-export-core
│       ├── export_mliap.py             # ML-IAP export
│       ├── inference_ddp.py            # Large-scale multi-GPU inference
│       ├── thermal_transport.py        # IFC2/IFC3, BTE, Callaway
│       └── evaluate_pes_coverage.py    # PES coverage (SOAP)
├── lammps_user_mfftorch/               # LAMMPS LibTorch package (USER-MFFTORCH)
│   ├── src/USER-MFFTORCH/              # pair_style mff/torch source
│   └── docs/BUILD_AND_RUN.md           # Build and run guide
├── scripts/                            # Install scripts, smoke tests
├── test/                               # Unit tests, benchmarks
└── docs/                               # Additional docs (LAMMPS, thermal)
```
- Python >= 3.8
- PyTorch >= 2.0.0
- e3nn >= 0.5.0
- ASE >= 3.22.0
- See `requirements.txt` for the full list
The library supports eight equivariant tensor product modes, each optimized for different use cases:
- `spherical`: e3nn-based spherical harmonics (default, standard implementation)
- `spherical-save`: channelwise edge convolution (e3nn backend; fewer params)
- `spherical-save-cue`: channelwise edge convolution (cuEquivariance backend; optional, GPU-accelerated)
- `partial-cartesian`: Cartesian coordinates + CG coefficients (strictly equivariant)
- `partial-cartesian-loose`: approximately equivariant (norm-product approximation)
- `pure-cartesian`: pure Cartesian (3^L) representation (strictly equivariant, very slow)
- `pure-cartesian-sparse`: sparse pure Cartesian (strictly equivariant, parameter-optimized)
- `pure-cartesian-ictd`: ICTD irreps internal representation (strictly equivariant, fastest, fewest parameters)
Most modes maintain strict O(3) equivariance (rotation and reflection); see the table below for the exceptions. Performance comparison:
| Mode | Equivariance | Speed (CPU), l=0-6 | Speed (GPU), common config | Parameters | Equivariance Error | Use Case |
|---|---|---|---|---|---|---|
| `spherical` | Strict | 1.00x (baseline) | 1.00x (baseline) | 100% (baseline) | ~1e-15 | Default, maximum compatibility, research/publication |
| `spherical-save-cue` | Strict | - | 16x | 32.6% (-67.4%) | ~1e-15 | Different GCN structure, designed for highest-speed MD; cannot be compared directly with other modes |
| `partial-cartesian` | Strict | 0.16x-1.06x | 0.75x | 82.6% (-17.4%) | ~1e-14 | Strict equivariance with fewer parameters |
| `partial-cartesian-loose` | Approximate | 0.17x-1.37x | 1.15x | 82.7% (-17.3%) | ~1e-15 | Fast iteration, approximate equivariance acceptable |
| `pure-cartesian-sparse` | SO(3) strict | 0.53x-1.39x | 1.17x | 70.4% (-29.6%) | ~1e-15 | Best balance: fewer params, stable performance |
| `pure-cartesian-ictd` | Strict | 1.58x-4.12x (fastest) | 5.0x | 27.9% (-72.1%) | ~1e-12 | Best overall: fewest params, fastest on CPU/GPU, strictly equivariant |
| `pure-cartesian` | Strict | 0.02x-0.36x (slowest) | 0.06x | 514.0% (+414%) | ~1e-14 | ❌ Not recommended (too slow, too many params) |
*CPU benchmark: channels=64, lmax=0-6, 32 atoms, 256 edges, float64. Speed shown is total training time (forward+backward) acceleration ratio relative to spherical.
*GPU benchmark: channels=64, lmax=0-6, 32 atoms, 256 edges, RTX 3090, float64. Speed shown is total training time (forward+backward) acceleration ratio relative to spherical.
*spherical-save-cue uses a different GCN structure and cannot be directly compared with other modes.
All modes pass O(3) equivariance tests (including parity/reflection, error < 1e-6).
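An equivariance test of this kind reduces to comparing f(Qx) with Qf(x) for random orthogonal Q, including improper ones. A toy NumPy version on an exactly equivariant function (not the library's test harness):

```python
import numpy as np

def toy_equivariant(pos):
    """A trivially O(3)-equivariant vector output: sum_i r_i * g(|r_i|),
    where the radial weight g depends only on the invariant norm."""
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    return (pos * np.exp(-r)).sum(axis=0)

def random_orthogonal(rng, reflect=False):
    """Random orthogonal 3x3 matrix; optionally force det(Q) = -1."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if reflect and np.linalg.det(Q) > 0:
        Q[:, 0] *= -1  # flip one axis to make the operation improper
    return Q

def equivariance_error(f, pos, Q):
    """Max-abs difference between f(Q x) and Q f(x)."""
    return float(np.max(np.abs(f(pos @ Q.T) - Q @ f(pos))))
```

For the toy function the error is at floating-point level; for a trained model the same comparison is what yields the quoted ~1e-12 to ~1e-15 figures.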
On CPU:

- Speed + memory: use `pure-cartesian-ictd` (1.58x-4.12x faster, 72.1% fewer parameters, all lmax)
- High precision: use `spherical` or `pure-cartesian-sparse` (equivariance error ~1e-15)
- Best balance: use `pure-cartesian-sparse` (0.53x-1.39x, 29.6% fewer params, strict equivariance)
- Standard baseline: use `spherical` (highest precision, standard implementation)
On GPU:

- Speed + memory: use `pure-cartesian-ictd` (5.0x faster, 72.1% fewer parameters, lmax≤3)
- High precision: use `spherical` or `pure-cartesian-sparse` (equivariance error ~1e-15)
- Best balance: use `pure-cartesian-sparse` (1.17x faster, 29.6% fewer params, strict equivariance)
- Avoid: `pure-cartesian` (too slow, fails at lmax≥4)
For detailed performance comparison and recommendations, see USAGE.md.
Dataset: Five nitrogen oxide and carbon structure reaction pathways from NEB (Nudged Elastic Band) calculations, filtered to fmax=0.2, totaling 2,788 structures. Test set: 1-2 complete or incomplete structures per reaction.
| Model | Configuration | Mode | Energy RMSE (meV/atom) | Force RMSE (meV/Å) |
|---|---|---|---|---|
| MACE correction=3 | Lmax=2, 64ch | - | 0.13 | 11.6 |
| | Lmax=2, 128ch | - | 0.12 | 11.3 |
| | Lmax=2, 198ch | - | 0.12 | 15.1 |
| FSCETP | Lmax=2, 64ch | `spherical` | 0.044 | 7.4 |
| | | `spherical-save-cue` | 0.076 | 8.0 |
| | | `partial-cartesian` | 0.045 | 7.4 |
| | | `partial-cartesian-loose` | 0.048 | 8.4 |
| | | `pure-cartesian-sparse` | 0.044 ⭐ | 6.5 ⭐ |
| | | `pure-cartesian-ictd` | 0.046 | 9.0 |
Key findings:

- Energy accuracy: FSCETP achieves 66.2% lower energy RMSE than MACE correction=3 (64ch) (0.044 vs 0.13 meV/atom)
- Force accuracy: FSCETP achieves 43.9% lower force RMSE than MACE correction=3 (64ch) (6.5 vs 11.6 meV/Å) with `pure-cartesian-sparse`
- Best performance: `pure-cartesian-sparse` achieves the best force RMSE (6.5 meV/Å) with competitive energy (0.044 meV/atom)
- Efficiency: `pure-cartesian-ictd` achieves competitive accuracy (energy: 0.046, force: 9.0) with 72.1% fewer parameters and 5.0x faster training
- USAGE.md - Full CLI and hyperparameter reference (Chinese)
- USAGE_EN.md - Full CLI and hyperparameter reference (English)
- LAMMPS_INTERFACE.md - LAMMPS integration guide (LibTorch, ML-IAP, fix external)
- THERMAL_TRANSPORT.md - MLFF thermal-conductivity workflow (IFC2/IFC3 -> BTE -> Callaway)
- lammps_user_mfftorch/docs/BUILD_AND_RUN.md - LibTorch interface build-and-run guide
MIT License
This framework implements eight equivariant tensor product modes:
- `spherical` and `spherical-save`: built on e3nn for spherical-harmonics-based tensor products
- `spherical-save-cue`: uses cuEquivariance for GPU-accelerated channelwise spherical convolution
- `partial-cartesian` and `partial-cartesian-loose`: partially use e3nn's Clebsch-Gordan coefficients and Irreps framework
- Self-implemented Cartesian modes: `pure-cartesian`, `pure-cartesian-sparse`, and `pure-cartesian-ictd` are independently implemented without e3nn dependencies
Other dependencies and inspirations:
- Uses ASE for molecular simulations
- Inspired by NequIP, MACE, and other equivariant neural potentials
If you use this library in your research, please cite:
```bibtex
@software{fused_sc_equitensorpot,
  title = {FusedSCEquiTensorPot},
  version = {0.1.0},
  url = {https://github.com/Parity-LRX/FusedSCEquiTensorPot}
}
```