Hello, thank you for your great work.
I am studying your code in transformer.py. You just use encoder_layers, and it uses normalize_before to decide whether to use positional encoding. You set normalize_before to False all the time. So, do you use positional encoding? If I have misunderstood, please let me know. Thank you.
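For context, here is a minimal sketch of the pattern I think I am seeing (class and method names here are hypothetical, not copied from your repo): in many transformer implementations, `normalize_before` only switches between pre-norm and post-norm placement of LayerNorm, while the positional encoding is mixed in separately through a `pos` argument when forming queries and keys.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Hypothetical encoder layer illustrating the question:
    normalize_before chooses pre-norm vs post-norm; the positional
    encoding enters via `pos`, independently of that flag."""
    def __init__(self, d_model=32, nhead=4, normalize_before=False):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead)
        self.norm = nn.LayerNorm(d_model)
        self.normalize_before = normalize_before

    @staticmethod
    def with_pos_embed(x, pos):
        # positional encoding is added here, regardless of normalize_before
        return x if pos is None else x + pos

    def forward(self, src, pos=None):
        # pre-norm: normalize before attention; post-norm: normalize after
        x = self.norm(src) if self.normalize_before else src
        q = k = self.with_pos_embed(x, pos)  # pos used for queries/keys only
        out, _ = self.self_attn(q, k, value=x)
        out = src + out  # residual connection
        return out if self.normalize_before else self.norm(out)

seq_len, batch, d_model = 10, 2, 32
src = torch.randn(seq_len, batch, d_model)
pos = torch.randn(seq_len, batch, d_model)
layer = EncoderLayer(normalize_before=False)
print(layer(src, pos=pos).shape)  # shape is preserved: (10, 2, 32)
```

If your code follows this shape, then setting `normalize_before=False` would not disable the positional encoding, which is why I am asking whether my reading of the flag is wrong.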