AdaTask: A Task-Aware Adaptive Learning Rate Approach to Multi-Task Learning. AAAI, 2023.
No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022)
A Julia package for adaptive proximal gradient and primal-dual algorithms
A Julia package for adaptive proximal gradient for convex bilevel optimization
Deep Filtering with adaptive learning rates
Theory-derived PyTorch optimizer matching Adam with zero tuning (τ* = κ√(σ²/λ)), validated on CIFAR-10 and CIFAR-100
Implementation of a two-layer perceptron (from scratch) with four back-propagation methods in Python
A simple feedforward neural network for classifying 2D data from Gaussian distributions. Includes training via backpropagation, momentum, adaptive learning rate, and visual decision boundary plots. Built for a Pattern Recognition course.
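The combination described above (gradient training with momentum plus an adaptive learning rate) can be sketched with a classic "bold driver" rule: accept a step and grow the rate when the loss improves, otherwise reject it and shrink the rate. This is an illustrative sketch on synthetic 2D Gaussian data, not the repository's actual code; the model here is a plain logistic-regression classifier rather than a full network.

```python
import numpy as np

# Two 2D Gaussian classes (synthetic stand-in for the repo's data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (100, 2)),
               rng.normal(+1.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
v_w, v_b = np.zeros(2), 0.0   # momentum buffers
lr, mu = 0.1, 0.9             # initial learning rate, momentum coefficient

def loss_and_grads(w, b):
    z = np.clip(X @ w + b, -30, 30)           # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    err = p - y
    return loss, X.T @ err / len(y), err.mean()

prev_loss, _, _ = loss_and_grads(w, b)
for _ in range(200):
    _, gw, gb = loss_and_grads(w, b)
    v_w = mu * v_w - lr * gw
    v_b = mu * v_b - lr * gb
    w_new, b_new = w + v_w, b + v_b
    new_loss, _, _ = loss_and_grads(w_new, b_new)
    if new_loss < prev_loss:                  # accept step, accelerate
        w, b, prev_loss = w_new, b_new, new_loss
        lr *= 1.05
    else:                                     # reject step, reset momentum, back off
        v_w, v_b = np.zeros(2), 0.0
        lr *= 0.5

acc = np.mean((X @ w + b > 0) == (y == 1))
print(f"final loss {prev_loss:.3f}, accuracy {acc:.2f}")
```

Because rejected steps are reverted, the loss is monotone non-increasing, which makes the rule robust to an occasionally too-large rate.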
Implementation of the WAME (Weight-wise Adaptive learning rates with Moving average Estimator) optimization algorithm for TensorFlow version 2.0 or higher.
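Weight-wise schemes like WAME maintain a separate step size per parameter. A minimal NumPy sketch of the general idea, using a Rprop-style sign rule (grow a weight's step while its gradient sign is stable, shrink it when the sign flips); this is an assumption-laden illustration, not WAME's actual update or the package's TensorFlow API.

```python
import numpy as np

def signwise_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                  step_min=1e-6, step_max=1.0):
    """One per-weight update (Rprop-style, illustrative): each weight keeps
    its own step size, grown where the gradient sign is stable and shrunk
    where it flips."""
    same_sign = np.sign(grad) * np.sign(prev_grad)
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    return w - np.sign(grad) * step, step

# usage: minimize f(w) = sum(w**2); gradient is 2*w
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = 2 * w
    w, step = signwise_step(w, grad, prev_grad, step)
    prev_grad = grad
```

Only the gradient's sign is used, so each weight's effective learning rate adapts independently of the gradient's magnitude.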
Optimize learning rates dynamically with the Syntonic optimizer, enhancing performance beyond Adam without the need for hyperparameter tuning.