Getting Started

Browse resources for getting started with RL Swarm.

Overview

RL Swarm is a peer-to-peer reinforcement-learning swarm you can run on a laptop or a GPU server. This page is an orientation & navigation hub.

Start by picking your OS below and follow the platform guide. If something breaks, start at Troubleshooting or join the Gensyn Discord for help.

Before You Begin

RL Swarm is available across several environments: Windows (via WSL 2), Linux (Ubuntu 22.04+), and macOS (Intel and Apple Silicon).

System Requirements

  • A 64-bit CPU (arm64 or x86_64) with at least 32 GB of RAM, or an officially supported NVIDIA GPU (RTX 3090, RTX 4090, RTX 5090, A100, H100)

  • Python 3.10+

  • Docker and Git installed & configured

  • Stable internet connection
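Before launching, it can help to confirm the requirements above are met. The sketch below is a hypothetical preflight script, not part of RL Swarm itself; the tool names (python3, docker, git, nvidia-smi) are assumptions you may need to adjust for your system.

```shell
# Hypothetical preflight check for the requirements listed above.

# check_tool NAME -> prints "NAME: ok" or "NAME: missing"
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: missing"
  fi
}

# check_python -> verifies the default python3 is 3.10 or newer
check_python() {
  if python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 10) else 1)' 2>/dev/null; then
    echo "python: ok"
  else
    echo "python: need 3.10+"
  fi
}

check_python
check_tool docker
check_tool git
# A GPU is optional: CPU mode still works, just slower.
check_tool nvidia-smi
```

If everything prints "ok" (the nvidia-smi line may report "missing" on CPU-only machines, which is fine), you are ready to continue.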

Accounts & Access

  • Hugging Face account: You’ll need a Write-access API token to participate in the swarm.

  • Gensyn Testnet account: Automatically created when you first log in through your browser during setup.
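If you keep your Hugging Face token in an environment variable rather than pasting it each run, a quick format check can catch copy-paste mistakes early. This is a hypothetical convenience, not part of RL Swarm's setup flow, and the `HF_TOKEN` variable name is an assumption; the only documented fact used here is that Hugging Face API tokens begin with the `hf_` prefix.

```shell
# Hypothetical sanity check: Hugging Face tokens start with "hf_".
# HF_TOKEN is an assumed variable name, not one RL Swarm requires.
looks_like_hf_token() {
  case "$1" in
    hf_*) echo yes ;;
    *)    echo no ;;
  esac
}

export HF_TOKEN="hf_your_token_here"   # paste your Write-access token
looks_like_hf_token "$HF_TOKEN"
```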

Supported Platforms & Installation Paths

Pick the guide for your platform: Windows (via WSL 2), Linux (Ubuntu 22.04+), or macOS (Intel and Apple Silicon).

FAQ

Can I run RL Swarm on my laptop?

Yes. If you don’t have a GPU, you’ll run in CPU mode. It will be slower, but you’ll still participate in the swarm.

What happens when I log in?

You’ll create an on-chain identity (via Alchemy) and a local swarm.pem file that identifies your peer.

What if I want to switch machines?

Copy your swarm.pem file to the new machine. That preserves your peer identity and animal name.
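The move above can be sketched as a small backup-and-transfer helper. This is a hypothetical script, and the source path is an assumption; use whatever location your install keeps swarm.pem in.

```shell
# Hypothetical helper: back up swarm.pem before switching machines.
# backup_identity SRC DEST -> copies the key, tightens permissions, confirms.
backup_identity() {
  src="$1"; dest="$2"
  cp -p "$src" "$dest" && chmod 600 "$dest" && echo "copied $(basename "$src")"
}

# To move to another machine, transfer the backup over SSH, for example:
#   scp swarm.pem user@new-host:~/rl-swarm/swarm.pem
# (host and destination path are placeholders)
```

Keeping the file mode at 600 ensures only your user can read the private key after the copy.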

What if I encounter errors or crashes?

Check the Troubleshooting guide first, or visit our Discord for help.

Resources

If you need additional set-up or post-launch support, you can open a ticket or visit our Discord.

  • Troubleshooting — Common setup issues and fixes.

  • Discord — Ask questions, report issues, or chat with the team.

  • AI Prediction Market — Learn more about the AI Prediction Market and current experiments.

  • Dashboard — Track your peer’s training and on-chain activity.
