Re-creation of the model from "Real-Time Guitar Amplifier Emulation with Deep Learning".

See my blog post for a more in-depth description, along with song demos.
- `data/in.wav` - Concatenation of a few samples from the IDMT-SMT-Guitar dataset
- `data/ts9_out.wav` - Recorded output of `in.wav` after being passed through an Ibanez TS9 Tube Screamer (all knobs at 12 o'clock)
- `models/pedalnet.ckpt` - Pretrained model weights
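The scripts below expect single-channel, 44.1 kHz `.wav` input. As an illustrative sketch (the helper names `check_wav` and `downmix_to_mono` are my own, not part of this repo), here is a stdlib-only way to verify a file and down-mix stereo 16-bit PCM; resampling to 44.1 kHz is not handled:

```python
import struct
import wave


def check_wav(path, required_rate=44100):
    """Return (meets_requirements, channels, sample_rate) for a .wav file."""
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        rate = w.getframerate()
    return channels == 1 and rate == required_rate, channels, rate


def downmix_to_mono(src, dst):
    """Naive stereo -> mono downmix for 16-bit PCM; sample rate unchanged."""
    with wave.open(src, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # average left/right pairs into a single channel
    mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    with wave.open(dst, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(mono), *mono))
```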
Run effect on a `.wav` file (must be single channel, 44.1 kHz):

```
# must be the same data used to train
python prepare_data.py data/in.wav data/ts9_out.wav

# specify input file and desired output file
python predict.py my_input_guitar.wav my_output.wav

# if you trained your own model you can pass the --model flag
# with the path to the .ckpt file
```

Train:
```
python prepare_data.py data/in.wav data/ts9_out.wav # or use your own!
python train.py
python train.py --gpus "0,1" # for multiple gpus
python train.py -h # help (see for other hyperparameters)
```

Test:
```
python test.py # test pretrained model
python test.py --model lightning_logs/version_{X}/epoch={EPOCH}.ckpt # test trained model
```

This creates the files `y_test.wav`, `y_pred.wav`, and `x_test.wav` for the ground-truth output, predicted output, and input signal, respectively.
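One way to compare `y_pred.wav` against `y_test.wav` is the error-to-signal ratio used in the amp-emulation literature: the energy of the prediction error divided by the energy of the target signal. A minimal sketch on raw sample sequences (the function name `error_to_signal` is my own, and this omits the pre-emphasis filtering some papers apply):

```python
def error_to_signal(y_true, y_pred):
    """Error-to-signal ratio: squared-error energy over target energy.

    Lower is better; identical signals give 0.
    """
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum(t ** 2 for t in y_true)
    return num / den


# identical signals give an ESR of 0
print(error_to_signal([1.0, -0.5, 0.25], [1.0, -0.5, 0.25]))  # → 0.0
```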