Currently, the kernel dumps ML model weights via a Jinja2 template, added as a temporary convenience in (#190). However, introducing this dependency is redundant: the same output can be produced with a plain Python f-string, and the ML stack needs to stay separate from the current OS stack. Will remove.
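As a minimal sketch of the replacement (the function name and output format here are hypothetical, not the project's actual code), an f-string can render a weight array without any template dependency:

```python
# Hypothetical example: emit model weights as a C float array using only
# a Python f-string, with no Jinja2 dependency. Names are illustrative.
def dump_weights_as_c_array(name: str, weights: list[float]) -> str:
    """Render a C snippet declaring `weights` as a static float array."""
    body = ",\n    ".join(f"{w:.8f}f" for w in weights)
    return (
        f"static const float {name}[{len(weights)}] = {{\n"
        f"    {body}\n"
        f"}};\n"
    )

print(dump_weights_as_c_array("model_weights", [0.1, -0.25, 3.0]))
```

Dropping the template engine keeps the build's Python tooling dependency-free, which is consistent with keeping the ML stack out of the OS stack.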