Core Components

Learn about the four core components that make up the Gensyn protocol.

The Four Layers

The Gensyn Protocol is built on four foundational components that together enable decentralized, verifiable machine learning at global scale.

Each layer contributes a distinct capability, from deterministic execution to economic coordination, and is represented by active research projects and products across the Gensyn ecosystem.

Consistent ML Execution

Ensuring reproducibility and compatibility across any device

To verify computation performed across thousands of heterogeneous machines, each node must execute machine learning workloads in a consistent and deterministic way.

This layer defines a framework for uniform execution, ensuring that identical inputs always produce identical outputs, regardless of hardware, drivers, or precision differences.

  • SAPO: A reinforcement learning algorithm designed for stable policy optimization across distributed nodes.

Work at this layer forms the execution substrate that makes trustless verification possible.
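The claim that "identical inputs always produce identical outputs" hinges on a subtle fact: floating-point addition is not associative, so the same values summed in different orders can differ bitwise. The sketch below is illustrative Python only, not the protocol's actual operator implementation; it shows the problem and one standard fix, a reduction with a fixed association order.

```python
from functools import reduce

def tree_sum(values):
    """Pairwise (tree) reduction with a fixed association order.

    Two devices that sum the same numbers in different orders can
    diverge bitwise; fixing the reduction tree makes the result
    reproducible across machines.
    """
    vals = list(values)
    while len(vals) > 1:
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

xs = [0.1, 0.2, 0.3, 1e16, -1e16, 0.4]
left_to_right = reduce(lambda a, b: a + b, xs)
right_to_left = reduce(lambda a, b: b + a, reversed(xs))
print(left_to_right == right_to_left)  # → False (order changed the result)
print(tree_sum(xs) == tree_sum(xs))    # → True (fixed order always agrees)
```

This only pins down summation order; real bitwise reproducibility also has to control precision, kernels, and hardware-specific behavior, which is what this layer addresses.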

Trustless Verification

Checking and agreeing on work performed in a scalable way

Once tasks can be executed deterministically, they must be verified without relying on trusted intermediaries.

The verification layer provides a refereed-delegation system that detects and resolves disagreements between compute providers and verifiers so the network can always reach consensus on correct results.

  • Verde: A library of bitwise-reproducible ML operators (RepOps) used to guarantee deterministic results.

  • Judge: A cryptographically verifiable AI evaluator that enforces correctness at the application layer.

Verde provides the theoretical framework, while Judge demonstrates its practical application for real-world AI workloads.
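The refereed-delegation idea can be illustrated with a toy bisection game. Assuming both parties commit to their intermediate states (the names `honest`, `cheater`, and the doubling `step` function are hypothetical), a referee can locate the first divergent step in O(log n) comparisons and re-execute only that single step rather than the whole computation. This is a sketch of the dispute-resolution principle, not Verde's or Judge's actual protocol.

```python
def find_divergent_step(honest_states, claimed_states):
    """Binary-search for the first step where two execution traces diverge.

    Both parties commit to intermediate states s_0..s_n. They agree on
    s_0 (the input) and disagree on s_n, so the first mismatch can be
    found in O(log n) comparisons; a referee then re-executes only that
    one step instead of repeating the entire computation.
    """
    lo, hi = 0, len(honest_states) - 1   # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_states[mid] == claimed_states[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # the single step the referee must re-check

# Toy trace: each step doubles the state; the cheater errs from step 5 on.
def step(s):
    return s * 2

honest = [1]
for _ in range(8):
    honest.append(step(honest[-1]))
cheater = honest[:5] + [s + 1 for s in honest[5:]]
print(find_divergent_step(honest, cheater))  # → 5
```

The key property is that checking one step is cheap even when the full computation is expensive, which is what makes verification scalable.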

Peer-to-Peer Communication

Sharing workloads efficiently between devices over the internet

Coordinating large-scale training over untrusted, bandwidth-limited networks requires new communication primitives.

This layer defines decentralized, fault-tolerant methods for distributing gradients, synchronizing models, and recovering from failure, all without centralized orchestration.

  • NoLoCo: Replaces the costly all-reduce step with a low-communication gossip approach for distributed training.

  • CheckFree: Enables fault-tolerant recovery without checkpointing to reduce compute overhead.

  • SkipPipe: Introduces an efficient gradient-sharing algorithm that minimizes 'message hops' across the network.

These methods form Gensyn’s communication backbone, allowing distributed compute resources to operate as one cohesive training system.
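To see why gossip can substitute for all-reduce, consider a toy round-based scheme in which each node averages parameters with a single peer per round. This deterministic neighbor-pairing sketch is for illustration only, not NoLoCo's actual algorithm; here `params` holds one scalar "model" per node.

```python
def gossip_round(params, offset):
    """Average disjoint neighbor pairs; `offset` alternates the pairing.

    Each node exchanges parameters with exactly one peer per round, so
    no global all-reduce barrier is ever needed, yet repeated rounds
    drive every node toward the global mean.
    """
    n = len(params)
    for i in range(offset, n + offset, 2):
        a, b = i % n, (i + 1) % n
        avg = (params[a] + params[b]) / 2
        params[a] = params[b] = avg

params = [float(i) for i in range(8)]  # each node starts with its own value
target = sum(params) / len(params)     # what an all-reduce would compute
for r in range(50):
    gossip_round(params, r % 2)
print(max(abs(p - target) for p in params))  # spread shrinks toward 0
```

The global sum is preserved by each pairwise average while the spread between nodes contracts geometrically, so the nodes converge to the same model without any central coordinator.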

Decentralized Coordination

Aligning incentives, orchestrating participation, and settling payments

At the highest level, the coordination layer ensures that the network remains open, fair, and economically sustainable.

It identifies participants, aligns incentives through tokenized rewards, and executes payments over a permissionless Ethereum rollup that forms the protocol's economic engine.

  • RL Swarm: A framework for collaborative reinforcement learning and collective intelligence.

  • Testnet: The live decentralized network where compute providers, verifiers, and researchers participate in open coordination, such as by training models and committing blockchain transactions with BlockAssist.

This layer links the protocol’s technical foundations with the incentive systems that keep the network running and growing.
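As a purely illustrative sketch of tokenized rewards, a pool can be divided exactly among providers in proportion to verified work. The function `split_rewards` and the proportional payout rule are assumptions for illustration, not the protocol's actual economics.

```python
from fractions import Fraction

def split_rewards(pool, verified_work):
    """Split a reward pool proportionally to each provider's verified work.

    Uses exact rational arithmetic so no value is lost to rounding;
    the real protocol settles payments on an Ethereum rollup.
    """
    total = sum(verified_work.values())
    return {node: Fraction(pool) * share / total
            for node, share in verified_work.items()}

payouts = split_rewards(100, {"alice": 3, "bob": 1, "carol": 1})
print(payouts["alice"])  # → 60
```

Exact proportional splitting is one simple way to keep payouts auditable: the shares always sum to the original pool.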
