# Get Started

## Quickstart Guide

Everything you need to go from zero to your first receipt: **\[1]** prerequisites, **\[2]** installation, and **\[3]** a guided first run.

{% hint style="info" %}
REE supports reproducible inference on models up to 72B parameters. On multi-GPU hosts, pipeline parallelism splits models that exceed a single GPU's memory across multiple GPUs.
{% endhint %}

### Prerequisites

* [**Docker**](https://www.docker.com/get-started/) installed and running.
* [**Python 3**](https://www.python.org/downloads/) installed on your machine.
* **Disk Space Requirements:** The compressed REE container image is roughly 7 GB; uncompressed, it occupies approximately 12 GB on disk.
* **NVIDIA GPU Driver Requirements:** Linux requires version **570.00+** and Windows requires **572.16+**. Check your current driver version with `nvidia-smi`.
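The checklist above can be scripted as a quick preflight. A minimal sketch, assuming a POSIX shell and GNU coreutils (`sort -V`); the hardcoded driver version is a placeholder for the real `nvidia-smi` query shown in the comment:

```shell
# Preflight sketch for the prerequisites above.

# Is the Docker daemon reachable?
if docker info >/dev/null 2>&1; then echo "docker: running"; else echo "docker: not running"; fi

# Is Python 3 available?
python3 --version

# Does the driver meet the Linux minimum? On a real GPU host, replace the
# hardcoded value with:
#   cur=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)
min="570.00"
cur="572.16"   # placeholder value for illustration
if [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]; then
  echo "driver: OK (${cur} >= ${min})"
else
  echo "driver: too old (${cur} < ${min})"
fi
```

`sort -V` orders version strings numerically, so the minimum sorting first means the current driver is new enough.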

{% hint style="info" %}
To update your drivers, visit [NVIDIA Driver Downloads](https://www.nvidia.com/en-us/drivers/). If your system lacks a compatible GPU or driver, you can still execute `ree.sh` with the `--cpu-only` flag for CPU-only mode.
{% endhint %}
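A wrapper script can apply this fallback automatically. A sketch, assuming that a working `nvidia-smi` is a good enough proxy for a usable GPU (`--cpu-only` is the only flag documented on this page):

```shell
# Pass --cpu-only to ree.sh when no NVIDIA driver/GPU is detected.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  flags=""
else
  flags="--cpu-only"
fi
echo "would run: ./ree.sh $flags"
# In a real wrapper, replace the echo above with: ./ree.sh $flags
```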

### Installing REE

Clone the [GitHub repository](https://github.com/gensyn-ai/ree) and navigate into it:

```bash
git clone https://github.com/gensyn-ai/ree.git
cd ree
```

No additional installation or dependency management is required. The TUI handles pulling the REE container image automatically on your first run.

{% hint style="info" %}
The repository also includes `ree.sh`, a lower-level shell script that `ree.py` calls under the hood. You shouldn't need to use `ree.sh` directly unless you're debugging or working on an [advanced integration](/tech/ree/advanced-usage.md).
{% endhint %}

### Launching the TUI

The TUI opens with an interactive form where you can configure and launch generations entirely from within the interface, without manually assembling CLI commands.

From the `ree` directory, run:

```bash
python3 ree.py
```

If you prefer the command line, REE can also be driven directly via `ree.sh` or the `gensyn-sdk` CLI without the TUI. This may be preferable if you're scripting, working in a CI pipeline, or using a coding agent like Claude Code. See the [Advanced Usage & CLI Reference](/tech/ree/advanced-usage.md) for the full CLI documentation.

### Your First Run

When the TUI launches, you'll see this form:

<figure><img src="/files/BDR8jn5hovhNakuQjQly" alt=""><figcaption></figcaption></figure>

To run your first generation:

1. Use the **arrow keys** to navigate to **Model Name** and press **Enter** to select a model from the list.
2. Navigate to **Prompt Text** and press **Enter**. Type a simple prompt like `Hello world`.
3. Set a **Max New Tokens** count.
4. Press **`r`** to run.

REE will pull the container image, prepare the model, run inference, and display a progress checklist:

<figure><img src="/files/eMD5INi6artxJ8ARwQMn" alt=""><figcaption></figcaption></figure>

Once complete, you'll see the **REE Output** section showing the [receipt](/tech/ree/receipts.md) file path and the model's generated text.

{% hint style="info" %}
If you used a Hugging Face test model or a small model, the output may be nonsensical. This is expected: such models either have random, untrained weights or too few parameters to produce polished output. The important thing is that the pipeline ran successfully.
{% endhint %}

From here you can press **`e`** to reset and configure another run, **`r`** to re-run with the same settings, or **`q`** to quit.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.gensyn.ai/tech/ree/get-started.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
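For example, asking about driver requirements from the shell. A minimal sketch: only spaces and `?` are percent-encoded here, and a real client should encode all reserved characters (for instance via `curl -G --data-urlencode`):

```shell
# Build the ask URL by percent-encoding the question.
# Minimal encoding for illustration only: spaces and '?' are handled;
# a real client should encode all reserved characters.
question="What GPU drivers does REE require?"
encoded=$(printf '%s' "$question" | sed 's/ /%20/g; s/?/%3F/g')
url="https://docs.gensyn.ai/tech/ree/get-started.md?ask=${encoded}"
echo "$url"
# Then fetch it, e.g.: curl -s "$url"
```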
