
RLink


RLink is a lightweight, high-performance communication layer specifically designed for distributed reinforcement learning systems. It enables seamless data exchange between actors (environment interaction) and learners (model training), decoupling sampling from training to scale your RL experiments efficiently.

✨ Key Features

🚀 Low-Latency Communication – Optimized for fast transfer of trajectories, actions, observations, and model parameters

📈 Scalability – Supports many-to-one and one-to-many communication patterns for flexible scaling

🔌 Easy Integration – Simple API to connect existing RL frameworks and training pipelines

🌍 Language-Agnostic Design – Currently supports Python with plans for C++/Rust backends

🛡️ Fault-Tolerant – Optional reliability features to handle intermittent connection drops

🎯 Why RLink?

Building distributed RL systems often involves complex communication infrastructure. RLink simplifies this by providing a dedicated, optimized layer that:

  • Decouples sampling and training processes

  • Accelerates experimentation across multiple processes or machines

  • Reduces infrastructure overhead

  • Enables seamless scaling of actors and learners

📊 Architecture Overview

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│                 │     │                 │     │                 │
│   RL Actors     │────▶│    RLink        │────▶│   RL Learners   │
│  (Sampling)     │◀────│  Communication  │◀────│   (Training)    │
│                 │     │     Layer       │     │                 │
└─────────────────┘     └─────────────────┘     └─────────────────┘

🚀 Quick Start

Installation

pip install rlinks

Basic Usage

As an Actor

import numpy as np

from rlinks.actor import RLinkActor

# Connect to the learner endpoint.
actor = RLinkActor("http://learner-ip:8443")

# Send data to the learner.
data = {
    "image_0": np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8),
    "action": np.random.rand(50, 14).astype(np.float32),
    "index": 0,
}

for i in range(4):
    data["index"] = i
    actor.put(data)

# Get the latest model from the learner.
models = actor.get_remote_model()
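In practice, an actor alternates between stepping its environment, pushing transitions with put, and periodically refreshing its policy weights with get_remote_model. A minimal sketch of that loop, assuming a Gymnasium environment and the RLinkActor API above; the field names, sync interval, and random policy are illustrative, not part of the RLink API:

import numpy as np
import gymnasium as gym

from rlinks.actor import RLinkActor

actor = RLinkActor("http://learner-ip:8443")
env = gym.make("CartPole-v1")

obs, _ = env.reset()
for step in range(1000):
    action = env.action_space.sample()  # stand-in for your policy
    next_obs, reward, terminated, truncated, _ = env.step(action)

    # Push one transition to the learner as a dict, as in the example above.
    actor.put({
        "obs": np.asarray(obs, dtype=np.float32),
        "action": np.asarray(action),
        "reward": np.float32(reward),
        "done": np.bool_(terminated or truncated),
        "index": step,
    })

    if terminated or truncated:
        obs, _ = env.reset()
    else:
        obs = next_obs

    # Periodically pull fresh weights (the interval is illustrative).
    if step % 100 == 0:
        models = actor.get_remote_model()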

As a Learner

# To start the learner, run it directly in a terminal or daemonize it to run in the background.
rlinks learner --gpu-num 8 --port 8443

# List all available options.
rlinks learner --help
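One common way to run the learner in the background is nohup; this is plain shell practice, not an RLink-specific feature:

nohup rlinks learner --gpu-num 8 --port 8443 > learner.log 2>&1 &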
On the training side, wrap RLinkDataset in your own dataset so the learner reads samples streamed from the actors:

import torch

from rlinks.dataset import RLinkDataset

class YourDataset:
    def __init__(self):
        # Bind the dataset to the GPU this worker trains on.
        self._rl_dataset = RLinkDataset(gpu_id=torch.cuda.current_device())

    def __getitem__(self, idx):
        # Fetch one sample received from the actors.
        return self._rl_dataset[idx]
To broadcast updated model weights back to the actors:

from rlinks.learner import RLinkSyncModel

RLinkSyncModel.sync("your model path")
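Putting the pieces together, a minimal sketch of a learner-side training loop using the YourDataset wrapper above and the "action" field from the actor example; the stand-in linear model, loss, step counts, and checkpoint path are purely illustrative:

import torch
import torch.nn as nn

from rlinks.learner import RLinkSyncModel

model = nn.Linear(14, 14)  # stand-in policy; shapes are illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dataset = YourDataset()    # the wrapper defined above

for step in range(1000):
    sample = dataset[step]                      # one sample streamed from the actors
    action = torch.as_tensor(sample["action"])  # field name from the actor example

    # Stand-in objective; replace with your RL loss.
    loss = model(action).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Periodically publish updated weights so actors can pull them.
    if step % 100 == 0:
        torch.save(model.state_dict(), "latest.pt")
        RLinkSyncModel.sync("latest.pt")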

📚 Use Cases

Distributed RL Training – Scale to hundreds of parallel environments

Multi-Agent Systems – Coordinate communication between agents

Federated RL – Train across distributed data sources

Hybrid Cloud/Edge Training – Deploy actors and learners across different infrastructure

🔄 Communication Patterns

Pattern         Description                          Use Case
Many-to-One     Multiple actors → single learner     Centralized training
One-to-Many     Single learner → multiple actors     Parameter distribution
Bidirectional   Two-way communication                Advanced coordination
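The many-to-one pattern falls out of the Quick Start API directly: several actor processes point at the same learner endpoint. A minimal sketch, assuming the RLinkActor API above; the process count and payload are illustrative:

import numpy as np
from multiprocessing import Process

from rlinks.actor import RLinkActor

def run_actor(actor_id: int) -> None:
    # Every actor connects to the same learner endpoint (many-to-one).
    actor = RLinkActor("http://learner-ip:8443")
    for i in range(4):
        actor.put({
            "action": np.random.rand(50, 14).astype(np.float32),
            "index": i,
            "actor_id": actor_id,
        })

if __name__ == "__main__":
    workers = [Process(target=run_actor, args=(n,)) for n in range(8)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()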

🛠️ Integration with Popular Frameworks

📈 Performance Benchmarks

🔮 Roadmap

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines (CONTRIBUTING) for details.

📄 License

RLink is released under the MIT License. See LICENSE for details.

📞 Support & Community

📖 Documentation

🐛 Issue Tracker

💬 Discord Community

🐦 Twitter Updates
