
Implement true GRU and RNN runtime kernels and layer classes in kernel.cu #39

Open
RemindCZ wants to merge 1 commit into master from codex/opravit-implementaci-gru-a-rnn

Conversation

@RemindCZ
Owner

Motivation

  • GRU and RNN were previously thin placeholders that reused LSTM internals and thus did not implement correct gate math or dimensions.
  • Ensure runtime polymorphism so GRULayer/RNNLayer behave correctly when created/initialized through base pointers.

Description

  • Added dedicated CUDA kernels kGRUGatesForward, kGRUGatesBackward, kRNNForward, and kRNNBackward implementing GRU (update/reset/candidate) and vanilla RNN (tanh) math.
  • Implemented full GRULayer and RNNLayer overrides (Init, InitFromData, Forward, Backward, Update, SaveBest, RestoreBest) that use correct gate dimensions (3 * hidden_size for GRU, 1 * hidden_size for RNN) and integrate with the existing cuBLAS calls and CUDA kernels.
  • Made LSTMLayer methods FreeAll, Init, and InitFromData virtual so derived layers' implementations are invoked through base-class pointers.
  • Changes are restricted to kernel.cu and aim only to correct recurrent-layer runtime/gradient behavior and memory layouts.

Testing

  • Verified the new kernels and layer overrides (kGRUGatesForward, kRNNForward, virtual Init) by searching for the symbols with rg; all were found as expected.
  • Confirmed the insertion points and gate-dimension adjustments by static inspection (nl / code review).
  • Attempted to compile with nvcc -std=c++17 -c kernel.cu, but nvcc is not available in this environment, so no compilation or runtime tests were executed.

Codex Task

