PyNNLF (Python for Network Net Load Forecasting) is a tool for evaluating net load forecasting model performance in a reliable and reproducible way, with the aim of making new net load forecasting research easier to verify and reproduce.

The tool includes a library of public net load datasets and common forecasting models, including simple benchmark models. Users input the forecast problem and model specification, and the tool outputs evaluation results. Users can also add datasets, add models, and modify hyperparameters, so researchers claiming a new or superior model can compare it against existing models on public datasets. The target audience is researchers in academia and industry focused on evaluating and optimizing net load forecasting models.
A visual illustration of the tool workflow is shown below.
- Forecast Target: Dataset and forecast horizon, defined in the YAML spec at `example_project/specs/experiment.yaml`.
- Model Specification: Model and hyperparameters, defined in the same YAML spec (a hypothetical sketch is shown below).
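For illustration, a minimal spec might look like the sketch below. Every key and value here is an assumption chosen to mirror the two inputs above; the actual schema is whatever appears in the `experiment.yaml` created in your workspace.

```yaml
# Hypothetical experiment spec, for illustration only.
# The real schema is the experiment.yaml created in the workspace.
dataset: ds0            # assumed key: dataset to forecast (the sample dataset)
forecast_horizon: 30    # assumed key: forecast horizon, e.g. 30 steps ahead
model: lr               # assumed key: which model to evaluate
hyperparameters: hp1    # assumed key: which hyperparameter set to use
```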
- `a1_experiment_result.csv` – Contains accuracy (cross-validated test n-RMSE), stability (accuracy standard deviation), and training time (see the loading sketch below this list).
- `a2_hyperparameter.csv` – Lists the hyperparameters used for each model.
- `a3_cross_validation_result.csv` – Detailed results for each cross-validation split.
- `cv_plots/` – Folder with plots, including:
  - Observation vs forecast (time plot)
  - Observation vs forecast (scatter plot)
  - Residual time plot
  - Residual histogram
- `cv_test/` and `cv_train/` – Folders containing time series of observations, forecasts, and residuals for each cross-validation split.
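Since the main outputs are plain CSV files, they can be inspected with pandas (a library the project already uses). A minimal sketch, using the example result-folder name from the Format section below; adjust the path, and expect possibly different column layouts in your own run:

```python
# Minimal sketch: load the summary and per-split CSVs from a result folder.
# The folder name is the example used in this README; adjust to your run.
import pandas as pd

run = "my_project/experiment_result/E00001_250915_ds0_fh30_m6_lr_hp1"

summary = pd.read_csv(f"{run}/a1_experiment_result.csv")   # accuracy, stability, training time
cv = pd.read_csv(f"{run}/a3_cross_validation_result.csv")  # per-split results

print(summary)
print(cv.head())
```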
Format:

```
[experiment_no]_[experiment_date]_[dataset]_[forecast_horizon]_[model]_[hyperparameter]
```

Example:

```
E00001_250915_ds0_fh30_m6_lr_hp1
```
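Assuming this format names the per-experiment result folders, note that the model field can itself contain underscores (e.g. `m6_lr`), so a plain `split("_")` would mis-split the name. A small regex sketch, inferred from the example above and not part of the pynnlf API:

```python
# Parse a result-folder name into its components (pattern inferred from the
# example above; not an official pynnlf utility).
import re

NAME = re.compile(
    r"^(?P<experiment_no>E\d+)"
    r"_(?P<experiment_date>\d{6})"
    r"_(?P<dataset>ds\d+)"
    r"_(?P<forecast_horizon>fh\d+)"
    r"_(?P<model>m.+)"          # may contain underscores, e.g. m6_lr
    r"_(?P<hyperparameter>hp\d+)$"
)

parts = NAME.match("E00001_250915_ds0_fh30_m6_lr_hp1").groupdict()
print(parts)  # {'experiment_no': 'E00001', ..., 'model': 'm6_lr', 'hyperparameter': 'hp1'}
```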
- Install the package (on macOS, use `python3`/`pip3` if `python`/`pip` are not available):

  ```
  python -m pip install pynnlf
  ```

- Initialize a workspace in any directory you want (example name: `my_project`). By default, only the sample dataset (`ds0`) is included; you can choose to download all datasets if needed. On macOS, use `python3 -c` if `python -c` is not available:

  ```
  python -c "import pynnlf; pynnlf.init('my_project')"
  ```

- Set up your experiment in `my_project/specs/experiment.yaml`.

- Run the experiment (these steps are also sketched as a single script below):

  ```
  python -c "import pynnlf; pynnlf.run_experiment('my_project/specs/experiment.yaml')"
  ```

- View results under `my_project/experiment_result`.
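Equivalently, the quickstart can be run as a single Python script built on the same two functions used above (`pynnlf.init` and `pynnlf.run_experiment`):

```python
# Quickstart as a script: initialize a workspace, then run an experiment.
# Edit my_project/specs/experiment.yaml between the two calls.
import pynnlf

pynnlf.init("my_project")
pynnlf.run_experiment("my_project/specs/experiment.yaml")
```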
CI (Continuous Integration) is automated testing that runs on code changes. For this repository, CI runs smoke tests on three models and checks whether their results fall within the standard benchmark.
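A minimal sketch of such a smoke test is shown below, assuming the output layout described above; the column name `n_rmse` and the 0.5 bound are illustrative assumptions, not project constants:

```python
# Hypothetical smoke test in the spirit of the CI: run one experiment and
# check that accuracy stays within a benchmark bound. The column name and
# threshold are assumptions; only pynnlf.run_experiment is from this README.
import glob

import pandas as pd
import pynnlf

def test_smoke_benchmark():
    pynnlf.run_experiment("my_project/specs/experiment.yaml")
    latest = sorted(glob.glob("my_project/experiment_result/*"))[-1]
    summary = pd.read_csv(f"{latest}/a1_experiment_result.csv")
    assert (summary["n_rmse"] < 0.5).all()  # assumed column name and bound
```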
The tool outputs the evaluation result as a single folder per experiment under `my_project/experiment_result`.
Detailed documentation, including examples, testing, a detailed guide, the API reference, and features and limitations, can be found in the PyNNLF Documentation.
This project is part of Samhan's PhD study, supported by the University International Postgraduate Award (UIPA) Scholarship from UNSW, the Industry Collaboration Project Scholarship from Ausgrid, and the RACE for 2030 Scholarship. We also acknowledge Solcast and the Australian Bureau of Meteorology (BOM) for providing access to historical weather datasets for this research. We further acknowledge the use of Python libraries including Pandas, NumPy, PyTorch, Scikit-learn, XGBoost, Prophet, Statsmodels, and Matplotlib. Finally, we thank the reviewers and editor of the Journal of Open Source Software for their valuable feedback and guidance.
The authors declare that they have no competing financial, personal, or professional interests related to this work.
- M. Syahman Samhan (m.samhan@unsw.edu.au): Lead developer and researcher. Responsible for conceptualization, implementation, documentation, and experimentation.
- Anna Bruce: Supervisor. Provided guidance on research direction and methodology.
- Baran Yildiz: Supervisor. Provided guidance on research direction and methodology.
